00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 632 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3298 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.085 The recommended git tool is: git 00:00:00.086 using credential 00000000-0000-0000-0000-000000000002 00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.135 Fetching changes from the remote Git repository 00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.174 Using shallow fetch with depth 1 00:00:00.174 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.174 > git --version # timeout=10 00:00:00.205 > git --version # 'git version 2.39.2' 00:00:00.205 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.935 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.946 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.955 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:03.955 > git config core.sparsecheckout # timeout=10 00:00:03.966 > git read-tree -mu HEAD # timeout=10 00:00:03.981 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:03.996 Commit message: "packer: Add bios builder" 00:00:03.996 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:04.085 [Pipeline] Start of Pipeline 00:00:04.096 [Pipeline] library 00:00:04.098 Loading library shm_lib@master 00:00:04.098 Library shm_lib@master is cached. Copying from home. 00:00:04.111 [Pipeline] node 00:00:04.123 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:04.124 [Pipeline] { 00:00:04.132 [Pipeline] catchError 00:00:04.133 [Pipeline] { 00:00:04.142 [Pipeline] wrap 00:00:04.149 [Pipeline] { 00:00:04.154 [Pipeline] stage 00:00:04.156 [Pipeline] { (Prologue) 00:00:04.343 [Pipeline] sh 00:00:04.623 + logger -p user.info -t JENKINS-CI 00:00:04.638 [Pipeline] echo 00:00:04.640 Node: WFP21 00:00:04.646 [Pipeline] sh 00:00:04.942 [Pipeline] setCustomBuildProperty 00:00:04.952 [Pipeline] echo 00:00:04.953 Cleanup processes 00:00:04.957 [Pipeline] sh 00:00:05.234 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.234 1882669 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.245 [Pipeline] sh 00:00:05.523 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:05.523 ++ grep -v 'sudo pgrep' 00:00:05.523 ++ awk '{print $1}' 00:00:05.523 + sudo kill -9 00:00:05.523 + true 00:00:05.536 [Pipeline] cleanWs 00:00:05.545 [WS-CLEANUP] Deleting project workspace... 00:00:05.545 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.551 [WS-CLEANUP] done 00:00:05.554 [Pipeline] setCustomBuildProperty 00:00:05.564 [Pipeline] sh 00:00:05.842 + sudo git config --global --replace-all safe.directory '*' 00:00:05.930 [Pipeline] httpRequest 00:00:05.963 [Pipeline] echo 00:00:05.965 Sorcerer 10.211.164.101 is alive 00:00:05.974 [Pipeline] httpRequest 00:00:05.978 HttpMethod: GET 00:00:05.979 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:05.979 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:05.996 Response Code: HTTP/1.1 200 OK 00:00:05.996 Success: Status code 200 is in the accepted range: 200,404 00:00:05.997 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.732 [Pipeline] sh 00:00:08.018 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:08.065 [Pipeline] httpRequest 00:00:08.088 [Pipeline] echo 00:00:08.090 Sorcerer 10.211.164.101 is alive 00:00:08.097 [Pipeline] httpRequest 00:00:08.101 HttpMethod: GET 00:00:08.102 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:08.103 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:08.104 Response Code: HTTP/1.1 200 OK 00:00:08.104 Success: Status code 200 is in the accepted range: 200,404 00:00:08.105 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:28.455 [Pipeline] sh 00:00:28.740 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:31.294 [Pipeline] sh 00:00:31.579 + git -C spdk log --oneline -n5 00:00:31.579 dbef7efac test: fix dpdk builds on ubuntu24 00:00:31.579 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:31.579 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:31.579 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:31.579 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:31.597 [Pipeline] withCredentials 00:00:31.609 > git --version # timeout=10 00:00:31.622 > git --version # 'git version 2.39.2' 00:00:31.640 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:31.643 [Pipeline] { 00:00:31.652 [Pipeline] retry 00:00:31.655 [Pipeline] { 00:00:31.673 [Pipeline] sh 00:00:31.957 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:31.970 [Pipeline] } 00:00:31.993 [Pipeline] // retry 00:00:31.999 [Pipeline] } 00:00:32.020 [Pipeline] // withCredentials 00:00:32.030 [Pipeline] httpRequest 00:00:32.060 [Pipeline] echo 00:00:32.062 Sorcerer 10.211.164.101 is alive 00:00:32.071 [Pipeline] httpRequest 00:00:32.076 HttpMethod: GET 00:00:32.077 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:32.077 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:32.090 Response Code: HTTP/1.1 200 OK 00:00:32.091 Success: Status code 200 is in the accepted range: 200,404 00:00:32.091 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:46.892 [Pipeline] sh 00:00:47.178 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:48.582 [Pipeline] sh 00:00:48.864 + git -C dpdk log --oneline -n5 00:00:48.864 eeb0605f11 version: 23.11.0 00:00:48.864 238778122a doc: update release notes for 23.11 00:00:48.864 
46aa6b3cfc doc: fix description of RSS features 00:00:48.864 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:48.865 7e421ae345 devtools: support skipping forbid rule check 00:00:48.875 [Pipeline] } 00:00:48.892 [Pipeline] // stage 00:00:48.900 [Pipeline] stage 00:00:48.902 [Pipeline] { (Prepare) 00:00:48.921 [Pipeline] writeFile 00:00:48.937 [Pipeline] sh 00:00:49.220 + logger -p user.info -t JENKINS-CI 00:00:49.233 [Pipeline] sh 00:00:49.517 + logger -p user.info -t JENKINS-CI 00:00:49.529 [Pipeline] sh 00:00:49.812 + cat autorun-spdk.conf 00:00:49.812 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.812 SPDK_TEST_NVMF=1 00:00:49.812 SPDK_TEST_NVME_CLI=1 00:00:49.812 SPDK_TEST_NVMF_NICS=mlx5 00:00:49.812 SPDK_RUN_UBSAN=1 00:00:49.812 NET_TYPE=phy 00:00:49.812 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:49.812 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:49.820 RUN_NIGHTLY=1 00:00:49.824 [Pipeline] readFile 00:00:49.848 [Pipeline] withEnv 00:00:49.850 [Pipeline] { 00:00:49.865 [Pipeline] sh 00:00:50.150 + set -ex 00:00:50.150 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:00:50.150 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:00:50.150 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.150 ++ SPDK_TEST_NVMF=1 00:00:50.150 ++ SPDK_TEST_NVME_CLI=1 00:00:50.150 ++ SPDK_TEST_NVMF_NICS=mlx5 00:00:50.150 ++ SPDK_RUN_UBSAN=1 00:00:50.150 ++ NET_TYPE=phy 00:00:50.150 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:50.150 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:00:50.150 ++ RUN_NIGHTLY=1 00:00:50.150 + case $SPDK_TEST_NVMF_NICS in 00:00:50.150 + DRIVERS=mlx5_ib 00:00:50.150 + [[ -n mlx5_ib ]] 00:00:50.150 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:50.150 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:56.719 rmmod: ERROR: Module irdma is not currently loaded 00:00:56.719 rmmod: ERROR: Module i40iw is not currently loaded 00:00:56.719 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:56.719 + true 00:00:56.719 + for D in $DRIVERS 00:00:56.719 + sudo modprobe mlx5_ib 00:00:56.719 + exit 0 00:00:56.728 [Pipeline] } 00:00:56.746 [Pipeline] // withEnv 00:00:56.751 [Pipeline] } 00:00:56.768 [Pipeline] // stage 00:00:56.779 [Pipeline] catchError 00:00:56.781 [Pipeline] { 00:00:56.796 [Pipeline] timeout 00:00:56.796 Timeout set to expire in 1 hr 0 min 00:00:56.798 [Pipeline] { 00:00:56.813 [Pipeline] stage 00:00:56.815 [Pipeline] { (Tests) 00:00:56.861 [Pipeline] sh 00:00:57.145 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:00:57.145 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:00:57.145 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:00:57.145 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:00:57.145 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:57.145 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:00:57.145 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:00:57.145 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:57.145 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:00:57.145 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:00:57.145 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:00:57.145 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:00:57.145 + source /etc/os-release 00:00:57.145 ++ NAME='Fedora Linux' 00:00:57.145 ++ VERSION='38 (Cloud Edition)' 00:00:57.145 ++ ID=fedora 00:00:57.145 ++ VERSION_ID=38 00:00:57.145 ++ VERSION_CODENAME= 00:00:57.145 ++ PLATFORM_ID=platform:f38 00:00:57.145 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:57.145 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:57.145 ++ LOGO=fedora-logo-icon 00:00:57.145 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:57.145 ++ HOME_URL=https://fedoraproject.org/ 00:00:57.145 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:57.145 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:57.145 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:57.145 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:57.145 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:57.145 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:57.145 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:57.145 ++ SUPPORT_END=2024-05-14 00:00:57.145 ++ VARIANT='Cloud Edition' 00:00:57.145 ++ VARIANT_ID=cloud 00:00:57.145 + uname -a 00:00:57.145 Linux spdk-wfp-21 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:57.145 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:00.443 Hugepages 00:01:00.443 node hugesize free / total 00:01:00.443 node0 1048576kB 0 / 0 00:01:00.443 node0 2048kB 0 / 0 00:01:00.443 node1 1048576kB 0 / 0 00:01:00.443 node1 2048kB 0 / 0 00:01:00.443 00:01:00.443 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:00.443 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:00.443 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:00.443 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:00.443 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:00.443 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:00.443 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:00.443 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:00.443 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:00.443 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:00.443 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:00.443 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:00.443 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:00.443 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:00.443 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:00.443 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:00.443 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:00.443 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:00.443 + rm -f /tmp/spdk-ld-path 00:01:00.443 + source autorun-spdk.conf 00:01:00.443 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.443 ++ SPDK_TEST_NVMF=1 00:01:00.443 ++ SPDK_TEST_NVME_CLI=1 00:01:00.443 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:00.443 ++ SPDK_RUN_UBSAN=1 00:01:00.443 ++ NET_TYPE=phy 00:01:00.443 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:00.443 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:00.443 ++ RUN_NIGHTLY=1 00:01:00.443 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:00.443 + [[ -n '' ]] 00:01:00.443 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:00.443 + for M in /var/spdk/build-*-manifest.txt 
00:01:00.443 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:00.443 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:00.443 + for M in /var/spdk/build-*-manifest.txt 00:01:00.443 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:00.443 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:00.443 ++ uname 00:01:00.443 + [[ Linux == \L\i\n\u\x ]] 00:01:00.443 + sudo dmesg -T 00:01:00.443 + sudo dmesg --clear 00:01:00.703 + dmesg_pid=1883772 00:01:00.703 + [[ Fedora Linux == FreeBSD ]] 00:01:00.703 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:00.703 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:00.703 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:00.703 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:00.703 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:00.703 + [[ -x /usr/src/fio-static/fio ]] 00:01:00.703 + export FIO_BIN=/usr/src/fio-static/fio 00:01:00.703 + FIO_BIN=/usr/src/fio-static/fio 00:01:00.703 + sudo dmesg -Tw 00:01:00.703 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:00.703 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:00.703 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:00.703 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:00.703 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:00.703 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:00.703 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:00.703 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:00.703 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:00.703 Test configuration: 00:01:00.703 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.703 SPDK_TEST_NVMF=1 00:01:00.703 SPDK_TEST_NVME_CLI=1 00:01:00.703 SPDK_TEST_NVMF_NICS=mlx5 00:01:00.703 SPDK_RUN_UBSAN=1 00:01:00.703 NET_TYPE=phy 00:01:00.703 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:00.703 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:00.703 RUN_NIGHTLY=1 21:47:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:00.703 21:47:11 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:00.703 21:47:11 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:00.703 21:47:11 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:00.703 21:47:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.703 21:47:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.703 21:47:11 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.703 21:47:11 -- paths/export.sh@5 -- $ export PATH 00:01:00.703 21:47:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:00.703 21:47:11 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:00.703 21:47:11 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:00.703 21:47:11 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1722023231.XXXXXX 00:01:00.703 21:47:11 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1722023231.iKaDhh 00:01:00.703 21:47:11 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:00.703 21:47:11 -- common/autobuild_common.sh@444 -- $ '[' -n v23.11 ']' 00:01:00.703 21:47:11 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:00.703 21:47:11 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:00.703 21:47:11 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:00.704 21:47:11 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:00.704 21:47:11 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:00.704 21:47:11 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:00.704 21:47:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.704 21:47:11 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:00.704 21:47:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:00.704 21:47:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:00.704 21:47:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:00.704 21:47:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:00.704 Fri Jul 26 07:47:11 PM UTC 2024 00:01:00.704 21:47:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:00.704 LTS-60-gdbef7efac 00:01:00.704 21:47:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:00.704 21:47:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:00.704 21:47:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:00.704 21:47:11 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:00.704 21:47:11 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:00.704 21:47:11 -- common/autotest_common.sh@10 -- $ set 
+x 00:01:00.704 ************************************ 00:01:00.704 START TEST ubsan 00:01:00.704 ************************************ 00:01:00.704 21:47:11 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:00.704 using ubsan 00:01:00.704 00:01:00.704 real 0m0.000s 00:01:00.704 user 0m0.000s 00:01:00.704 sys 0m0.000s 00:01:00.704 21:47:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:00.704 21:47:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.704 ************************************ 00:01:00.704 END TEST ubsan 00:01:00.704 ************************************ 00:01:00.704 21:47:11 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:00.704 21:47:11 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:00.704 21:47:11 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:00.704 21:47:11 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:00.704 21:47:11 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:00.704 21:47:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:00.704 ************************************ 00:01:00.704 START TEST build_native_dpdk 00:01:00.704 ************************************ 00:01:00.704 21:47:11 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:00.704 21:47:11 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:00.704 21:47:11 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:00.704 21:47:11 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:00.704 21:47:11 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:00.704 21:47:11 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:00.704 21:47:11 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:00.704 21:47:11 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:00.704 21:47:11 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:00.704 21:47:11 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:00.704 21:47:11 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:00.704 21:47:11 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:00.704 21:47:11 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:00.704 21:47:11 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:00.704 21:47:11 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:00.704 21:47:11 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:00.704 21:47:11 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:00.964 21:47:11 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:00.964 21:47:11 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:00.964 21:47:11 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:00.964 21:47:11 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:00.964 eeb0605f11 version: 23.11.0 00:01:00.964 238778122a doc: update release notes for 23.11 00:01:00.964 46aa6b3cfc doc: fix description of RSS features 00:01:00.964 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:00.964 7e421ae345 devtools: support skipping forbid rule check 00:01:00.964 21:47:11 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:00.964 21:47:11 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:00.964 21:47:11 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:00.964 21:47:11 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:00.964 21:47:11 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:00.964 21:47:11 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:00.964 21:47:11 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:00.964 21:47:11 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:00.964 21:47:11 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:00.964 21:47:11 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:00.964 21:47:11 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:00.964 21:47:11 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:00.964 21:47:11 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:00.964 21:47:11 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:00.964 21:47:11 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:00.964 21:47:11 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:00.964 21:47:11 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:00.964 21:47:11 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:00.964 21:47:11 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:00.964 21:47:11 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:00.964 21:47:11 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:00.964 21:47:11 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:00.964 21:47:11 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:00.964 21:47:11 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:00.964 21:47:11 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:00.964 21:47:11 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:00.964 21:47:11 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:00.964 21:47:11 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:00.964 21:47:11 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:00.964 21:47:11 -- scripts/common.sh@343 -- $ case "$op" in 00:01:00.964 21:47:11 -- scripts/common.sh@344 -- $ : 1 00:01:00.964 21:47:11 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:00.964 21:47:11 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:00.964 21:47:11 -- scripts/common.sh@364 -- $ decimal 23 00:01:00.964 21:47:11 -- scripts/common.sh@352 -- $ local d=23 00:01:00.964 21:47:11 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:00.964 21:47:11 -- scripts/common.sh@354 -- $ echo 23 00:01:00.964 21:47:11 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:00.964 21:47:11 -- scripts/common.sh@365 -- $ decimal 21 00:01:00.964 21:47:11 -- scripts/common.sh@352 -- $ local d=21 00:01:00.964 21:47:11 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:00.964 21:47:11 -- scripts/common.sh@354 -- $ echo 21 00:01:00.964 21:47:11 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:00.964 21:47:11 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:00.964 21:47:11 -- scripts/common.sh@366 -- $ return 1 00:01:00.964 21:47:11 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:00.964 patching file config/rte_config.h 00:01:00.964 Hunk #1 succeeded at 60 (offset 1 line). 00:01:00.964 21:47:11 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:01:00.964 21:47:11 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:01:00.964 21:47:11 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:00.964 21:47:11 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:00.964 21:47:11 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:00.964 21:47:11 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:00.964 21:47:11 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:00.964 21:47:11 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:00.964 21:47:11 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:00.964 21:47:11 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:00.964 21:47:11 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:00.964 21:47:11 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:00.964 21:47:11 -- scripts/common.sh@343 -- $ case "$op" in 00:01:00.964 21:47:11 -- scripts/common.sh@344 -- $ : 1 00:01:00.964 21:47:11 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:00.964 21:47:11 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:00.964 21:47:11 -- scripts/common.sh@364 -- $ decimal 23 00:01:00.964 21:47:11 -- scripts/common.sh@352 -- $ local d=23 00:01:00.964 21:47:11 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:00.964 21:47:11 -- scripts/common.sh@354 -- $ echo 23 00:01:00.964 21:47:11 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:00.964 21:47:11 -- scripts/common.sh@365 -- $ decimal 24 00:01:00.964 21:47:11 -- scripts/common.sh@352 -- $ local d=24 00:01:00.964 21:47:11 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:00.964 21:47:11 -- scripts/common.sh@354 -- $ echo 24 00:01:00.964 21:47:11 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:00.964 21:47:11 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:00.964 21:47:11 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:00.964 21:47:11 -- scripts/common.sh@367 -- $ return 0 00:01:00.964 21:47:11 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:00.964 patching file lib/pcapng/rte_pcapng.c 00:01:00.964 21:47:11 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:00.964 21:47:11 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:00.964 21:47:12 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:00.964 21:47:12 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:00.964 21:47:12 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:06.270 The Meson build system 00:01:06.270 Version: 1.3.1 00:01:06.270 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:06.270 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:01:06.270 Build type: native build 00:01:06.270 Program cat found: YES (/usr/bin/cat) 00:01:06.270 Project name: DPDK 00:01:06.270 Project version: 23.11.0 00:01:06.270 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:06.270 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:06.270 Host machine cpu family: x86_64 00:01:06.270 Host machine cpu: x86_64 00:01:06.270 Message: ## Building in Developer Mode ## 00:01:06.270 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:06.270 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:06.270 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:06.270 Program python3 found: YES (/usr/bin/python3) 00:01:06.270 Program cat found: YES (/usr/bin/cat) 00:01:06.270 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:06.270 Compiler for C supports arguments -march=native: YES 00:01:06.270 Checking for size of "void *" : 8 00:01:06.270 Checking for size of "void *" : 8 (cached) 00:01:06.270 Library m found: YES 00:01:06.270 Library numa found: YES 00:01:06.270 Has header "numaif.h" : YES 00:01:06.270 Library fdt found: NO 00:01:06.270 Library execinfo found: NO 00:01:06.270 Has header "execinfo.h" : YES 00:01:06.270 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:06.270 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:06.270 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:06.270 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:06.270 Run-time dependency openssl found: YES 3.0.9 00:01:06.270 Run-time dependency libpcap found: YES 1.10.4 00:01:06.270 Has header "pcap.h" with dependency libpcap: YES 00:01:06.270 Compiler for C supports arguments -Wcast-qual: YES 00:01:06.270 Compiler for C supports arguments -Wdeprecated: YES 00:01:06.270 Compiler for C supports arguments -Wformat: YES 00:01:06.270 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:06.270 Compiler for C supports arguments -Wformat-security: NO 00:01:06.270 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:06.270 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:06.270 Compiler for C supports arguments -Wnested-externs: YES 00:01:06.270 Compiler for C supports arguments -Wold-style-definition: YES 00:01:06.270 Compiler for C supports arguments -Wpointer-arith: YES 00:01:06.270 Compiler for C supports arguments -Wsign-compare: YES 00:01:06.270 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:06.270 Compiler for C supports arguments -Wundef: YES 00:01:06.270 Compiler for C supports arguments -Wwrite-strings: YES 00:01:06.270 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:06.270 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:06.270 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:06.270 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:06.270 Program objdump found: YES (/usr/bin/objdump) 00:01:06.270 Compiler for C supports arguments -mavx512f: YES 00:01:06.270 Checking if "AVX512 checking" compiles: YES 00:01:06.270 Fetching value of define "__SSE4_2__" : 1 00:01:06.270 Fetching value of define "__AES__" : 1 00:01:06.270 Fetching value of define "__AVX__" : 1 00:01:06.270 Fetching value of define "__AVX2__" : 1 00:01:06.270 Fetching value of define "__AVX512BW__" : 1 00:01:06.270 Fetching value of define "__AVX512CD__" : 1 00:01:06.270 Fetching value of define "__AVX512DQ__" : 1 00:01:06.270 Fetching value of define "__AVX512F__" : 1 00:01:06.270 Fetching value of define "__AVX512VL__" : 1 00:01:06.270 Fetching value of define "__PCLMUL__" : 1 00:01:06.270 Fetching value of define "__RDRND__" : 1 00:01:06.270 Fetching value of define "__RDSEED__" : 1 00:01:06.270 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:06.270 Fetching value of define "__znver1__" : (undefined) 00:01:06.270 Fetching value of define "__znver2__" : (undefined) 00:01:06.270 Fetching value of define "__znver3__" : (undefined) 00:01:06.270 Fetching value of define "__znver4__" : (undefined) 00:01:06.270 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:06.270 Message: lib/log: Defining dependency "log" 00:01:06.270 Message: lib/kvargs: Defining dependency "kvargs" 00:01:06.270 Message: lib/telemetry: Defining dependency 
"telemetry" 00:01:06.270 Checking for function "getentropy" : NO 00:01:06.270 Message: lib/eal: Defining dependency "eal" 00:01:06.270 Message: lib/ring: Defining dependency "ring" 00:01:06.270 Message: lib/rcu: Defining dependency "rcu" 00:01:06.270 Message: lib/mempool: Defining dependency "mempool" 00:01:06.270 Message: lib/mbuf: Defining dependency "mbuf" 00:01:06.270 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:06.270 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:06.270 Compiler for C supports arguments -mpclmul: YES 00:01:06.270 Compiler for C supports arguments -maes: YES 00:01:06.270 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:06.270 Compiler for C supports arguments -mavx512bw: YES 00:01:06.270 Compiler for C supports arguments -mavx512dq: YES 00:01:06.270 Compiler for C supports arguments -mavx512vl: YES 00:01:06.270 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:06.270 Compiler for C supports arguments -mavx2: YES 00:01:06.270 Compiler for C supports arguments -mavx: YES 00:01:06.270 Message: lib/net: Defining dependency "net" 00:01:06.270 Message: lib/meter: Defining dependency "meter" 00:01:06.270 Message: lib/ethdev: Defining dependency "ethdev" 00:01:06.270 Message: lib/pci: Defining dependency "pci" 00:01:06.270 Message: lib/cmdline: Defining dependency "cmdline" 00:01:06.270 Message: lib/metrics: Defining dependency "metrics" 00:01:06.270 Message: lib/hash: Defining dependency "hash" 00:01:06.270 Message: lib/timer: Defining dependency "timer" 00:01:06.270 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:06.270 Message: lib/acl: Defining dependency "acl" 00:01:06.270 Message: lib/bbdev: Defining dependency "bbdev" 00:01:06.270 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:06.270 Run-time dependency libelf found: YES 0.190 00:01:06.270 Message: lib/bpf: Defining dependency "bpf" 00:01:06.270 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:06.270 Message: lib/compressdev: Defining dependency "compressdev" 00:01:06.270 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:06.270 Message: lib/distributor: Defining dependency "distributor" 00:01:06.270 Message: lib/dmadev: Defining dependency "dmadev" 00:01:06.270 Message: lib/efd: Defining dependency "efd" 00:01:06.270 Message: lib/eventdev: Defining dependency "eventdev" 00:01:06.270 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:06.270 Message: lib/gpudev: Defining dependency "gpudev" 00:01:06.270 Message: lib/gro: Defining dependency "gro" 00:01:06.270 Message: lib/gso: Defining dependency "gso" 00:01:06.270 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:06.270 Message: lib/jobstats: Defining dependency "jobstats" 00:01:06.270 Message: lib/latencystats: Defining dependency "latencystats" 00:01:06.270 Message: lib/lpm: Defining dependency "lpm" 00:01:06.270 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:01:06.270 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:06.270 Message: lib/member: Defining dependency "member" 00:01:06.270 Message: lib/pcapng: Defining dependency "pcapng" 00:01:06.270 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:06.270 Message: lib/power: Defining dependency "power" 00:01:06.270 Message: lib/rawdev: Defining dependency "rawdev" 00:01:06.270 Message: lib/regexdev: Defining dependency "regexdev" 00:01:06.270 Message: lib/mldev: Defining dependency "mldev" 00:01:06.270 Message: lib/rib: Defining dependency "rib" 00:01:06.270 Message: lib/reorder: Defining dependency "reorder" 00:01:06.270 Message: lib/sched: Defining dependency "sched" 00:01:06.270 Message: lib/security: Defining dependency "security" 00:01:06.270 Message: lib/stack: Defining dependency "stack" 00:01:06.270 Has header "linux/userfaultfd.h" : YES 00:01:06.270 Has header "linux/vduse.h" : YES 00:01:06.270 Message: lib/vhost: Defining dependency "vhost" 00:01:06.270 Message: lib/ipsec: Defining dependency "ipsec" 00:01:06.270 Message: lib/pdcp: Defining dependency "pdcp" 00:01:06.270 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:06.270 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:06.270 Message: lib/fib: Defining dependency "fib" 00:01:06.270 Message: lib/port: Defining dependency "port" 00:01:06.270 Message: lib/pdump: Defining dependency "pdump" 00:01:06.270 Message: lib/table: Defining dependency "table" 00:01:06.270 Message: lib/pipeline: Defining dependency "pipeline" 00:01:06.270 Message: lib/graph: Defining dependency "graph" 00:01:06.270 Message: lib/node: Defining dependency "node" 00:01:06.270 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:06.838 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:06.838 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:06.838 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:06.838 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:06.838 Compiler for C supports arguments -Wno-unused-value: YES 00:01:06.838 Compiler for C supports arguments -Wno-format: YES 00:01:06.838 Compiler for C supports arguments -Wno-format-security: YES 00:01:06.838 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:06.838 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:06.838 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:06.838 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:06.838 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:06.838 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:06.838 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:06.838 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:06.838 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:06.838 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:06.839 Has header "sys/epoll.h" : YES 00:01:06.839 Program doxygen found: YES (/usr/bin/doxygen) 00:01:06.839 Configuring doxy-api-html.conf using configuration 00:01:06.839 Configuring doxy-api-man.conf using configuration 00:01:06.839 Program mandb found: YES (/usr/bin/mandb) 00:01:06.839 Program sphinx-build found: NO 00:01:06.839 Configuring rte_build_config.h using configuration 00:01:06.839 Message: 00:01:06.839 ================= 00:01:06.839 Applications Enabled 00:01:06.839 
================= 00:01:06.839 00:01:06.839 apps: 00:01:06.839 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:06.839 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:06.839 test-pmd, test-regex, test-sad, test-security-perf, 00:01:06.839 00:01:06.839 Message: 00:01:06.839 ================= 00:01:06.839 Libraries Enabled 00:01:06.839 ================= 00:01:06.839 00:01:06.839 libs: 00:01:06.839 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:06.839 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:06.839 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:06.839 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:06.839 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:06.839 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:06.839 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:06.839 00:01:06.839 00:01:06.839 Message: 00:01:06.839 =============== 00:01:06.839 Drivers Enabled 00:01:06.839 =============== 00:01:06.839 00:01:06.839 common: 00:01:06.839 00:01:06.839 bus: 00:01:06.839 pci, vdev, 00:01:06.839 mempool: 00:01:06.839 ring, 00:01:06.839 dma: 00:01:06.839 00:01:06.839 net: 00:01:06.839 i40e, 00:01:06.839 raw: 00:01:06.839 00:01:06.839 crypto: 00:01:06.839 00:01:06.839 compress: 00:01:06.839 00:01:06.839 regex: 00:01:06.839 00:01:06.839 ml: 00:01:06.839 00:01:06.839 vdpa: 00:01:06.839 00:01:06.839 event: 00:01:06.839 00:01:06.839 baseband: 00:01:06.839 00:01:06.839 gpu: 00:01:06.839 00:01:06.839 00:01:06.839 Message: 00:01:06.839 ================= 00:01:06.839 Content Skipped 00:01:06.839 ================= 00:01:06.839 00:01:06.839 apps: 00:01:06.839 00:01:06.839 libs: 00:01:06.839 00:01:06.839 drivers: 00:01:06.839 common/cpt: not in enabled drivers build config 00:01:06.839 common/dpaax: not in enabled drivers build config 00:01:06.839 common/iavf: not in enabled drivers build config 00:01:06.839 common/idpf: not in enabled drivers build config 00:01:06.839 common/mvep: not in enabled drivers build config 00:01:06.839 common/octeontx: not in enabled drivers build config 00:01:06.839 bus/auxiliary: not in enabled drivers build config 00:01:06.839 bus/cdx: not in enabled drivers build config 00:01:06.839 bus/dpaa: not in enabled drivers build config 00:01:06.839 bus/fslmc: not in enabled drivers build config 00:01:06.839 bus/ifpga: not in enabled drivers build config 00:01:06.839 bus/platform: not in enabled drivers build config 00:01:06.839 bus/vmbus: not in enabled drivers build config 00:01:06.839 common/cnxk: not in enabled drivers build config 00:01:06.839 common/mlx5: not in enabled drivers build config 00:01:06.839 common/nfp: not in enabled drivers build config 00:01:06.839 common/qat: not in enabled drivers build config 00:01:06.839 common/sfc_efx: not in enabled drivers build config 00:01:06.839 mempool/bucket: not in enabled drivers build config 00:01:06.839 mempool/cnxk: not in enabled drivers build config 00:01:06.839 mempool/dpaa: not in enabled drivers build config 00:01:06.839 mempool/dpaa2: not in enabled drivers build config 00:01:06.839 mempool/octeontx: not in enabled drivers build config 00:01:06.839 mempool/stack: not in enabled drivers build config 00:01:06.839 dma/cnxk: not in enabled drivers build config 00:01:06.839 dma/dpaa: not in enabled drivers build config 00:01:06.839 dma/dpaa2: not in enabled drivers build 
config 00:01:06.839 dma/hisilicon: not in enabled drivers build config 00:01:06.839 dma/idxd: not in enabled drivers build config 00:01:06.839 dma/ioat: not in enabled drivers build config 00:01:06.839 dma/skeleton: not in enabled drivers build config 00:01:06.839 net/af_packet: not in enabled drivers build config 00:01:06.839 net/af_xdp: not in enabled drivers build config 00:01:06.839 net/ark: not in enabled drivers build config 00:01:06.839 net/atlantic: not in enabled drivers build config 00:01:06.839 net/avp: not in enabled drivers build config 00:01:06.839 net/axgbe: not in enabled drivers build config 00:01:06.839 net/bnx2x: not in enabled drivers build config 00:01:06.839 net/bnxt: not in enabled drivers build config 00:01:06.839 net/bonding: not in enabled drivers build config 00:01:06.839 net/cnxk: not in enabled drivers build config 00:01:06.839 net/cpfl: not in enabled drivers build config 00:01:06.839 net/cxgbe: not in enabled drivers build config 00:01:06.839 net/dpaa: not in enabled drivers build config 00:01:06.839 net/dpaa2: not in enabled drivers build config 00:01:06.839 net/e1000: not in enabled drivers build config 00:01:06.839 net/ena: not in enabled drivers build config 00:01:06.839 net/enetc: not in enabled drivers build config 00:01:06.839 net/enetfec: not in enabled drivers build config 00:01:06.839 net/enic: not in enabled drivers build config 00:01:06.839 net/failsafe: not in enabled drivers build config 00:01:06.839 net/fm10k: not in enabled drivers build config 00:01:06.839 net/gve: not in enabled drivers build config 00:01:06.839 net/hinic: not in enabled drivers build config 00:01:06.839 net/hns3: not in enabled drivers build config 00:01:06.839 net/iavf: not in enabled drivers build config 00:01:06.839 net/ice: not in enabled drivers build config 00:01:06.839 net/idpf: not in enabled drivers build config 00:01:06.839 net/igc: not in enabled drivers build config 00:01:06.839 net/ionic: not in enabled drivers build config 00:01:06.839 net/ipn3ke: not in enabled drivers build config 00:01:06.839 net/ixgbe: not in enabled drivers build config 00:01:06.839 net/mana: not in enabled drivers build config 00:01:06.839 net/memif: not in enabled drivers build config 00:01:06.839 net/mlx4: not in enabled drivers build config 00:01:06.839 net/mlx5: not in enabled drivers build config 00:01:06.839 net/mvneta: not in enabled drivers build config 00:01:06.839 net/mvpp2: not in enabled drivers build config 00:01:06.839 net/netvsc: not in enabled drivers build config 00:01:06.839 net/nfb: not in enabled drivers build config 00:01:06.839 net/nfp: not in enabled drivers build config 00:01:06.839 net/ngbe: not in enabled drivers build config 00:01:06.839 net/null: not in enabled drivers build config 00:01:06.839 net/octeontx: not in enabled drivers build config 00:01:06.839 net/octeon_ep: not in enabled drivers build config 00:01:06.839 net/pcap: not in enabled drivers build config 00:01:06.839 net/pfe: not in enabled drivers build config 00:01:06.839 net/qede: not in enabled drivers build config 00:01:06.839 net/ring: not in enabled drivers build config 00:01:06.839 net/sfc: not in enabled drivers build config 00:01:06.839 net/softnic: not in enabled drivers build config 00:01:06.839 net/tap: not in enabled drivers build config 00:01:06.839 net/thunderx: not in enabled drivers build config 00:01:06.839 net/txgbe: not in enabled drivers build config 00:01:06.839 net/vdev_netvsc: not in enabled drivers build config 00:01:06.839 net/vhost: not in enabled drivers build config 
00:01:06.839 net/virtio: not in enabled drivers build config 00:01:06.839 net/vmxnet3: not in enabled drivers build config 00:01:06.839 raw/cnxk_bphy: not in enabled drivers build config 00:01:06.839 raw/cnxk_gpio: not in enabled drivers build config 00:01:06.839 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:06.839 raw/ifpga: not in enabled drivers build config 00:01:06.839 raw/ntb: not in enabled drivers build config 00:01:06.839 raw/skeleton: not in enabled drivers build config 00:01:06.839 crypto/armv8: not in enabled drivers build config 00:01:06.839 crypto/bcmfs: not in enabled drivers build config 00:01:06.839 crypto/caam_jr: not in enabled drivers build config 00:01:06.839 crypto/ccp: not in enabled drivers build config 00:01:06.839 crypto/cnxk: not in enabled drivers build config 00:01:06.839 crypto/dpaa_sec: not in enabled drivers build config 00:01:06.839 crypto/dpaa2_sec: not in enabled drivers build config 00:01:06.839 crypto/ipsec_mb: not in enabled drivers build config 00:01:06.839 crypto/mlx5: not in enabled drivers build config 00:01:06.839 crypto/mvsam: not in enabled drivers build config 00:01:06.839 crypto/nitrox: not in enabled drivers build config 00:01:06.839 crypto/null: not in enabled drivers build config 00:01:06.839 crypto/octeontx: not in enabled drivers build config 00:01:06.839 crypto/openssl: not in enabled drivers build config 00:01:06.839 crypto/scheduler: not in enabled drivers build config 00:01:06.839 crypto/uadk: not in enabled drivers build config 00:01:06.839 crypto/virtio: not in enabled drivers build config 00:01:06.839 compress/isal: not in enabled drivers build config 00:01:06.839 compress/mlx5: not in enabled drivers build config 00:01:06.839 compress/octeontx: not in enabled drivers build config 00:01:06.839 compress/zlib: not in enabled drivers build config 00:01:06.839 regex/mlx5: not in enabled drivers build config 00:01:06.840 regex/cn9k: not in enabled drivers build config 00:01:06.840 ml/cnxk: not in enabled drivers build config 00:01:06.840 vdpa/ifc: not in enabled drivers build config 00:01:06.840 vdpa/mlx5: not in enabled drivers build config 00:01:06.840 vdpa/nfp: not in enabled drivers build config 00:01:06.840 vdpa/sfc: not in enabled drivers build config 00:01:06.840 event/cnxk: not in enabled drivers build config 00:01:06.840 event/dlb2: not in enabled drivers build config 00:01:06.840 event/dpaa: not in enabled drivers build config 00:01:06.840 event/dpaa2: not in enabled drivers build config 00:01:06.840 event/dsw: not in enabled drivers build config 00:01:06.840 event/opdl: not in enabled drivers build config 00:01:06.840 event/skeleton: not in enabled drivers build config 00:01:06.840 event/sw: not in enabled drivers build config 00:01:06.840 event/octeontx: not in enabled drivers build config 00:01:06.840 baseband/acc: not in enabled drivers build config 00:01:06.840 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:06.840 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:06.840 baseband/la12xx: not in enabled drivers build config 00:01:06.840 baseband/null: not in enabled drivers build config 00:01:06.840 baseband/turbo_sw: not in enabled drivers build config 00:01:06.840 gpu/cuda: not in enabled drivers build config 00:01:06.840 00:01:06.840 00:01:06.840 Build targets in project: 217 00:01:06.840 00:01:06.840 DPDK 23.11.0 00:01:06.840 00:01:06.840 User defined options 00:01:06.840 libdir : lib 00:01:06.840 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:06.840 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:06.840 c_link_args : 00:01:06.840 enable_docs : false 00:01:06.840 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:06.840 enable_kmods : false 00:01:06.840 machine : native 00:01:06.840 tests : false 00:01:06.840 00:01:06.840 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:06.840 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:06.840 21:47:17 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:01:06.840 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:07.110 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:07.110 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:07.110 [3/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:07.110 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:07.110 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:07.110 [6/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:07.110 [7/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:07.110 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:07.110 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:07.110 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:07.110 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:07.110 [12/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:07.110 [13/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:07.369 [14/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:07.369 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:07.369 [16/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:07.369 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:07.369 [18/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:07.369 [19/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:07.369 [20/707] Linking static target lib/librte_kvargs.a 00:01:07.369 [21/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:07.369 [22/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:07.369 [23/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:07.369 [24/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:07.369 [25/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:07.369 [26/707] Linking static target lib/librte_pci.a 00:01:07.369 [27/707] Linking static target lib/librte_log.a 00:01:07.369 [28/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:07.369 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:07.369 [30/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:07.369 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:07.369 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:07.369 [33/707] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:07.369 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:07.369 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:07.632 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:07.632 [37/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.632 [38/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.632 [39/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:07.632 [40/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:07.632 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:07.632 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:07.632 [43/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:07.633 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:07.633 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:07.892 [46/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:07.892 [47/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:07.892 [48/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:07.892 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:07.892 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:07.892 [51/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:07.892 [52/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:07.892 [53/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:07.892 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:07.892 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:07.892 [56/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:07.892 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:07.892 [58/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:07.892 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:07.892 [60/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:07.892 [61/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:07.892 [62/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:07.892 [63/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:07.892 [64/707] Linking static target lib/librte_meter.a 00:01:07.892 [65/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:07.892 [66/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:07.892 [67/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:07.892 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:07.892 [69/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:07.892 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:07.892 [71/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:07.892 [72/707] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:07.892 [73/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:07.892 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:07.892 [75/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:07.893 [76/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:07.893 [77/707] Linking static target lib/librte_ring.a 00:01:07.893 [78/707] Linking static target lib/librte_cmdline.a 00:01:07.893 [79/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:07.893 [80/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:07.893 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:07.893 [82/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:07.893 [83/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:07.893 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:07.893 [85/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:07.893 [86/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:07.893 [87/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:07.893 [88/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:07.893 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:07.893 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:07.893 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:07.893 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:07.893 [93/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:07.893 [94/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:07.893 [95/707] Linking static target lib/librte_metrics.a 00:01:07.893 [96/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:07.893 [97/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:07.893 [98/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:07.893 [99/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:07.893 [100/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:07.893 [101/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:07.893 [102/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:07.893 [103/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:07.893 [104/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:08.155 [105/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:08.155 [106/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:08.155 [107/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:08.155 [108/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:08.155 [109/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:08.155 [110/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:08.155 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:08.155 [112/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:08.155 [113/707] Linking static target lib/librte_cfgfile.a 00:01:08.155 [114/707] Linking static target 
lib/librte_net.a 00:01:08.155 [115/707] Linking static target lib/librte_bitratestats.a 00:01:08.155 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:08.155 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:08.155 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:08.155 [119/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.155 [120/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:08.155 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:08.155 [122/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:08.155 [123/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:08.155 [124/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:08.155 [125/707] Linking target lib/librte_log.so.24.0 00:01:08.155 [126/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:08.155 [127/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:08.155 [128/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:08.155 [129/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.155 [130/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:08.155 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:08.155 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:08.420 [133/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:08.420 [134/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:08.420 [135/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:08.420 [136/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.420 [137/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:08.420 [138/707] Linking static target lib/librte_timer.a 00:01:08.420 [139/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:08.420 [140/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:08.420 [141/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:08.420 [142/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:08.420 [143/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:08.420 [144/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:08.420 [145/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:08.420 [146/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:08.420 [147/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:08.420 [148/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:08.420 [149/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.420 [150/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:08.420 [151/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:08.420 [152/707] Linking target lib/librte_kvargs.so.24.0 00:01:08.420 [153/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:08.420 [154/707] Linking static target lib/librte_mempool.a 00:01:08.420 
[155/707] Linking static target lib/librte_bbdev.a 00:01:08.420 [156/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:08.420 [157/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.420 [158/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:08.420 [159/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:08.686 [160/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:08.686 [161/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:08.686 [162/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:08.686 [163/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:08.686 [164/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:08.686 [165/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:08.686 [166/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:08.686 [167/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:08.686 [168/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.686 [169/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:08.686 [170/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:08.686 [171/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:08.686 [172/707] Linking static target lib/librte_jobstats.a 00:01:08.686 [173/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.686 [174/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:08.686 [175/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:08.686 [176/707] Linking static target lib/librte_compressdev.a 00:01:08.686 [177/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:08.686 [178/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:08.686 [179/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:08.686 [180/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:08.686 [181/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:08.686 [182/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:08.686 [183/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:08.686 [184/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:08.686 [185/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:08.686 [186/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:08.686 [187/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:08.686 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:08.946 [189/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:08.946 [190/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:08.946 [191/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:08.946 [192/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:08.946 [193/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:08.946 [194/707] Linking static target lib/librte_dispatcher.a 00:01:08.946 [195/707] Linking static 
target lib/member/libsketch_avx512_tmp.a 00:01:08.946 [196/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:08.946 [197/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:08.946 [198/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:08.946 [199/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:08.946 [200/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:08.946 [201/707] Linking static target lib/librte_latencystats.a 00:01:08.946 [202/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:08.946 [203/707] Linking static target lib/librte_telemetry.a 00:01:08.946 [204/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:08.946 [205/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:08.946 [206/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:08.946 [207/707] Linking static target lib/librte_rcu.a 00:01:08.946 [208/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:08.946 [209/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:08.946 [210/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:08.946 [211/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:08.946 [212/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:08.946 [213/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:08.946 [214/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:08.946 [215/707] Linking static target lib/librte_gpudev.a 00:01:08.946 [216/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:08.946 [217/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:08.946 [218/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:08.946 [219/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.946 [220/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:08.946 [221/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:08.946 [222/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:08.946 [223/707] Linking static target lib/librte_stack.a 00:01:08.946 [224/707] Linking static target lib/librte_dmadev.a 00:01:08.946 [225/707] Linking static target lib/librte_eal.a 00:01:08.946 [226/707] Linking static target lib/librte_gro.a 00:01:08.946 [227/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:08.946 [228/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:08.946 [229/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:08.946 [230/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:08.946 [231/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:08.946 [232/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:08.947 [233/707] Linking static target lib/librte_gso.a 00:01:08.947 [234/707] Linking static target lib/librte_regexdev.a 00:01:08.947 [235/707] Linking static target lib/librte_distributor.a 00:01:08.947 [236/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:08.947 [237/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 
00:01:08.947 [238/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:08.947 [239/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:09.211 [240/707] Linking static target lib/librte_rawdev.a 00:01:09.211 [241/707] Linking static target lib/librte_mbuf.a 00:01:09.211 [242/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:09.211 [243/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:09.211 [244/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:09.211 [245/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:09.211 [246/707] Linking static target lib/librte_mldev.a 00:01:09.211 [247/707] Linking static target lib/librte_power.a 00:01:09.211 [248/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:09.211 [249/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.211 [250/707] Linking static target lib/librte_ip_frag.a 00:01:09.211 [251/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:09.211 [252/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:09.211 [253/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:09.211 [254/707] Linking static target lib/librte_pcapng.a 00:01:09.211 [255/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:09.211 [256/707] Linking static target lib/librte_reorder.a 00:01:09.211 [257/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.211 [258/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:09.211 [259/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:09.211 [260/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:09.211 [261/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.211 [262/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:09.211 [263/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:09.211 [264/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:09.211 [265/707] Linking static target lib/librte_bpf.a 00:01:09.211 [266/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:09.211 [267/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.211 [268/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:09.211 [269/707] Linking static target lib/librte_security.a 00:01:09.211 [270/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:09.479 [271/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:09.479 [272/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:09.479 [273/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.479 [274/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:09.479 [275/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.479 [276/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:09.479 [277/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:09.479 [278/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.479 [279/707] Compiling C object 
lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:09.479 [280/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:09.479 [281/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:09.479 [282/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:09.479 [283/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.479 [284/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.479 [285/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:09.479 [286/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:09.479 [287/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:09.479 [288/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.479 [289/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:09.479 [290/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.479 [291/707] Linking static target lib/librte_lpm.a 00:01:09.744 [292/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:09.744 [293/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:09.744 [294/707] Linking static target lib/librte_rib.a 00:01:09.744 [295/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.744 [296/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.744 [297/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:09.744 [298/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:09.744 [299/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.744 [300/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:09.744 [301/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:09.744 [302/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.744 [303/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:09.744 [304/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.744 [305/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:09.744 [306/707] Linking target lib/librte_telemetry.so.24.0 00:01:09.744 [307/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:09.744 [308/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:09.744 [309/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:09.744 [310/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:09.744 [311/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:09.744 [312/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:09.744 [313/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:09.744 [314/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:09.744 [315/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.744 [316/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:09.744 [317/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:09.744 [318/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:09.744 [319/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:09.744 [320/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:09.744 [321/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.007 [322/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:10.007 [323/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:10.007 [324/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:10.007 [325/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:10.007 [326/707] Linking static target lib/librte_efd.a 00:01:10.007 [327/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:10.007 [328/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:10.007 [329/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:10.007 [330/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:10.007 [331/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:10.007 [332/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:10.007 [333/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:10.007 [334/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:10.007 [335/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:10.007 [336/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:10.007 [337/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:10.007 [338/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.007 [339/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.007 [340/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:10.007 [341/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:10.007 [342/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:10.007 [343/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:10.272 [344/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:10.272 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:10.272 [346/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:10.272 [347/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:10.272 [348/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:10.272 [349/707] Linking static target lib/librte_fib.a 00:01:10.272 [350/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:10.272 [351/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:10.272 [352/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.272 [353/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:10.272 [354/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:10.272 [355/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:10.272 [356/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:10.272 [357/707] Compiling C object 
lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:10.272 [358/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.272 [359/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:10.272 [360/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:10.272 [361/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:10.272 [362/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.272 [363/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:10.272 [364/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:10.272 [365/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:10.538 [366/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:10.538 [367/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:10.538 [368/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:10.538 [369/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.538 [370/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.538 [371/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:10.538 [372/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:10.538 [373/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:10.538 [374/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:10.538 [375/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.538 [376/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:10.538 [377/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:10.538 [378/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:10.538 [379/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:10.538 [380/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:10.538 [381/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:10.538 [382/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:10.539 [383/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:10.539 [384/707] Linking static target lib/librte_pdump.a 00:01:10.539 [385/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:10.539 [386/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:10.539 [387/707] Linking static target lib/librte_graph.a 00:01:10.539 [388/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:10.539 [389/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:10.802 [390/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:10.802 [391/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:10.802 [392/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:10.802 [393/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:10.802 [394/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:10.802 [395/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:10.802 [396/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:10.802 [397/707] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:10.802 [398/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:10.802 [399/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:10.802 [400/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:10.802 [401/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:10.802 [402/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:10.802 [403/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:10.802 [404/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:10.802 [405/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:10.802 [406/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:10.802 [407/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:10.802 [408/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:10.802 [409/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:10.802 [410/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:10.802 [411/707] Linking static target lib/librte_sched.a 00:01:10.802 [412/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:10.802 [413/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.802 [414/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:10.802 [415/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:10.802 [416/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:10.802 [417/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:10.802 [418/707] Linking static target drivers/librte_bus_vdev.a 00:01:10.802 [419/707] Linking static target lib/librte_table.a 00:01:10.802 [420/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:11.069 [421/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:11.069 [422/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:11.069 [423/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:11.069 [424/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:11.069 [425/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:11.069 [426/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:11.069 [427/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:11.069 [428/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:11.069 [429/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:11.069 [430/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.069 [431/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:11.069 [432/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:11.069 [433/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:11.069 [434/707] Linking static target lib/librte_cryptodev.a 00:01:11.069 [435/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:11.069 [436/707] Compiling C object 
lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:11.069 [437/707] Linking static target drivers/librte_bus_pci.a 00:01:11.069 [438/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:11.069 [439/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:11.069 [440/707] Linking static target lib/librte_member.a 00:01:11.069 [441/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:11.069 [442/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:11.069 [443/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:11.069 [444/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:11.331 [445/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:11.331 [446/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:11.331 [447/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:11.331 [448/707] Linking static target lib/librte_ipsec.a 00:01:11.331 [449/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:11.331 [450/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:11.331 [451/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:11.331 [452/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:11.331 [453/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:11.331 [454/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:11.331 [455/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:11.331 [456/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.331 [457/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:11.331 [458/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:11.331 [459/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:11.331 [460/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:11.331 [461/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:11.331 [462/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:11.331 [463/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:11.331 [464/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.331 [465/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:11.331 [466/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:11.332 [467/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:11.332 [468/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:11.332 [469/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:11.332 [470/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:11.332 [471/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:11.332 [472/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:11.332 [473/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:11.591 
[474/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:11.591 [475/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:11.591 [476/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:11.591 [477/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:11.591 [478/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.591 [479/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:11.591 [480/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:11.591 [481/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:11.591 [482/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:11.591 [483/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:11.591 [484/707] Linking static target lib/librte_node.a 00:01:11.591 [485/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.592 [486/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:11.592 [487/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:11.592 [488/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:11.592 [489/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:11.592 [490/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:11.592 [491/707] Linking static target drivers/librte_mempool_ring.a 00:01:11.592 [492/707] Linking static target lib/librte_pdcp.a 00:01:11.592 [493/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:11.592 [494/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:11.592 [495/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:11.592 [496/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:11.592 [497/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:11.592 [498/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.592 [499/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:11.592 [500/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:11.592 [501/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:11.592 [502/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:11.592 [503/707] Linking static target lib/librte_hash.a 00:01:11.592 [504/707] Linking static target lib/librte_port.a 00:01:11.592 [505/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:11.592 [506/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.592 [507/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:11.851 [508/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:11.851 [509/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:11.851 [510/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:11.851 [511/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:11.851 [512/707] Linking static target lib/acl/libavx2_tmp.a 00:01:11.851 [513/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:11.851 [514/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:11.851 [515/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.851 [516/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:11.851 [517/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:11.851 [518/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:11.851 [519/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:11.851 [520/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:11.851 [521/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:11.851 [522/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:11.851 [523/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:11.851 [524/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:11.851 [525/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:11.851 [526/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:11.851 [527/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:11.851 [528/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:11.851 [529/707] Linking static target lib/librte_eventdev.a 00:01:11.851 [530/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.851 [531/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:11.851 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:11.851 [533/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:11.851 [534/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:11.851 [535/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:11.851 [536/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:11.851 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:11.851 [538/707] Linking static target lib/librte_acl.a 00:01:11.851 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:11.851 [540/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:12.110 [541/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:12.110 [542/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.110 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:12.110 [544/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:12.110 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:12.110 [546/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:12.110 [547/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:12.110 [548/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:12.110 [549/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:12.110 [550/707] 
Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:12.110 [551/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:12.110 [552/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:12.368 [553/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:12.368 [554/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:12.368 [555/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:12.368 [556/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:12.368 [557/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:12.368 [558/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:12.368 [559/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:12.368 [560/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:12.368 [561/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.368 [562/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:12.368 [563/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.368 [564/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:12.368 [565/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:12.368 [566/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:12.368 [567/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:12.626 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:12.626 [569/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:12.626 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:12.884 [571/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:12.884 [572/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:12.884 [573/707] Linking static target lib/librte_ethdev.a 00:01:12.884 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:12.884 [575/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:13.142 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:13.142 [577/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:13.400 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:13.400 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:13.659 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:14.226 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:14.226 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:14.226 [583/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:14.484 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:14.485 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:14.485 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:14.485 [587/707] Linking static target drivers/librte_net_i40e.a 00:01:15.051 [588/707] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:15.310 [589/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.569 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:15.569 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:16.137 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:21.415 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.415 [594/707] Linking target lib/librte_eal.so.24.0 00:01:21.415 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:21.415 [596/707] Linking target lib/librte_pci.so.24.0 00:01:21.415 [597/707] Linking target lib/librte_cfgfile.so.24.0 00:01:21.415 [598/707] Linking target lib/librte_meter.so.24.0 00:01:21.415 [599/707] Linking target lib/librte_ring.so.24.0 00:01:21.415 [600/707] Linking target lib/librte_jobstats.so.24.0 00:01:21.415 [601/707] Linking target lib/librte_stack.so.24.0 00:01:21.415 [602/707] Linking target lib/librte_dmadev.so.24.0 00:01:21.415 [603/707] Linking target lib/librte_timer.so.24.0 00:01:21.415 [604/707] Linking target lib/librte_rawdev.so.24.0 00:01:21.415 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:01:21.415 [606/707] Linking target lib/librte_acl.so.24.0 00:01:21.673 [607/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:21.674 [608/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:21.674 [609/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:21.674 [610/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:21.674 [611/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:21.674 [612/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:21.674 [613/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:21.674 [614/707] Linking target drivers/librte_bus_pci.so.24.0 00:01:21.674 [615/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.674 [616/707] Linking target lib/librte_rcu.so.24.0 00:01:21.674 [617/707] Linking target lib/librte_mempool.so.24.0 00:01:21.674 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:21.674 [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:21.674 [620/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:21.983 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:01:21.983 [622/707] Linking target lib/librte_mbuf.so.24.0 00:01:21.983 [623/707] Linking target lib/librte_rib.so.24.0 00:01:21.983 [624/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:21.983 [625/707] Linking static target lib/librte_pipeline.a 00:01:21.983 [626/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:21.983 [627/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:21.983 [628/707] Linking target lib/librte_compressdev.so.24.0 00:01:21.983 [629/707] Linking target lib/librte_fib.so.24.0 00:01:21.983 [630/707] Linking target lib/librte_reorder.so.24.0 
00:01:21.983 [631/707] Linking target lib/librte_bbdev.so.24.0 00:01:21.983 [632/707] Linking target lib/librte_gpudev.so.24.0 00:01:21.983 [633/707] Linking target lib/librte_distributor.so.24.0 00:01:21.983 [634/707] Linking target lib/librte_mldev.so.24.0 00:01:21.983 [635/707] Linking target lib/librte_net.so.24.0 00:01:21.983 [636/707] Linking target lib/librte_regexdev.so.24.0 00:01:21.983 [637/707] Linking target lib/librte_cryptodev.so.24.0 00:01:21.983 [638/707] Linking target lib/librte_sched.so.24.0 00:01:22.263 [639/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:22.263 [640/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:22.263 [641/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:22.263 [642/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:22.263 [643/707] Linking target lib/librte_cmdline.so.24.0 00:01:22.263 [644/707] Linking target lib/librte_hash.so.24.0 00:01:22.263 [645/707] Linking target lib/librte_security.so.24.0 00:01:22.263 [646/707] Linking target lib/librte_ethdev.so.24.0 00:01:22.263 [647/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:22.263 [648/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:22.522 [649/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:22.522 [650/707] Linking target lib/librte_lpm.so.24.0 00:01:22.522 [651/707] Linking target lib/librte_efd.so.24.0 00:01:22.522 [652/707] Linking target lib/librte_member.so.24.0 00:01:22.522 [653/707] Linking target lib/librte_ipsec.so.24.0 00:01:22.522 [654/707] Linking target lib/librte_pdcp.so.24.0 00:01:22.522 [655/707] Linking target lib/librte_metrics.so.24.0 00:01:22.522 [656/707] Linking target lib/librte_pcapng.so.24.0 00:01:22.522 [657/707] Linking target lib/librte_gro.so.24.0 00:01:22.522 [658/707] Linking target lib/librte_gso.so.24.0 00:01:22.522 [659/707] Linking target lib/librte_bpf.so.24.0 00:01:22.522 [660/707] Linking target lib/librte_ip_frag.so.24.0 00:01:22.522 [661/707] Linking target lib/librte_power.so.24.0 00:01:22.522 [662/707] Linking target lib/librte_eventdev.so.24.0 00:01:22.522 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:01:22.522 [664/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:22.522 [665/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:22.522 [666/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:22.522 [667/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:22.522 [668/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:22.522 [669/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:22.522 [670/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:22.522 [671/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:22.522 [672/707] Linking static target lib/librte_vhost.a 00:01:22.781 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:01:22.781 [674/707] Linking target lib/librte_graph.so.24.0 00:01:22.781 [675/707] Linking target lib/librte_pdump.so.24.0 00:01:22.781 [676/707] Linking target lib/librte_bitratestats.so.24.0 00:01:22.781 [677/707] Linking 
target lib/librte_latencystats.so.24.0 00:01:22.781 [678/707] Linking target lib/librte_port.so.24.0 00:01:22.781 [679/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:22.781 [680/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:22.781 [681/707] Linking target lib/librte_node.so.24.0 00:01:22.781 [682/707] Linking target lib/librte_table.so.24.0 00:01:23.040 [683/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:23.040 [684/707] Linking target app/dpdk-proc-info 00:01:23.040 [685/707] Linking target app/dpdk-test-pipeline 00:01:23.040 [686/707] Linking target app/dpdk-test-acl 00:01:23.040 [687/707] Linking target app/dpdk-dumpcap 00:01:23.040 [688/707] Linking target app/dpdk-pdump 00:01:23.040 [689/707] Linking target app/dpdk-test-dma-perf 00:01:23.040 [690/707] Linking target app/dpdk-test-sad 00:01:23.040 [691/707] Linking target app/dpdk-test-fib 00:01:23.040 [692/707] Linking target app/dpdk-test-flow-perf 00:01:23.040 [693/707] Linking target app/dpdk-graph 00:01:23.040 [694/707] Linking target app/dpdk-test-regex 00:01:23.040 [695/707] Linking target app/dpdk-test-cmdline 00:01:23.040 [696/707] Linking target app/dpdk-test-bbdev 00:01:23.040 [697/707] Linking target app/dpdk-test-gpudev 00:01:23.040 [698/707] Linking target app/dpdk-test-compress-perf 00:01:23.040 [699/707] Linking target app/dpdk-test-security-perf 00:01:23.040 [700/707] Linking target app/dpdk-test-crypto-perf 00:01:23.040 [701/707] Linking target app/dpdk-test-mldev 00:01:23.040 [702/707] Linking target app/dpdk-test-eventdev 00:01:23.299 [703/707] Linking target app/dpdk-testpmd 00:01:24.680 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.680 [705/707] Linking target lib/librte_vhost.so.24.0 00:01:27.970 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.970 [707/707] Linking target lib/librte_pipeline.so.24.0 00:01:27.970 21:47:38 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install 00:01:27.970 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:01:27.970 [0/1] Installing files. 
00:01:27.970 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.970 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:27.971 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.971 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.972 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:27.972 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.973 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:27.974 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.975 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.976 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:27.976 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:27.976 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing 
lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_gro.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.976 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 
00:01:27.977 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:27.977 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:27.977 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:27.977 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:27.977 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:01:27.977 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:27.977 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-bbdev to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.240 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.241 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.242 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:28.243 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:01:28.244 Installing symlink pointing to librte_log.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so.24 00:01:28.244 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_log.so 00:01:28.244 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:01:28.244 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:28.244 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:01:28.244 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:28.244 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:01:28.244 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:28.244 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:01:28.244 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:28.244 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:01:28.244 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:28.244 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:01:28.244 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:28.244 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:01:28.244 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:28.244 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.24 00:01:28.244 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:01:28.244 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:01:28.244 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:28.244 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:01:28.244 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:28.244 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:01:28.244 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:28.244 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:01:28.244 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:28.244 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:01:28.244 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:28.244 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:01:28.244 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:28.244 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:01:28.244 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:28.244 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:01:28.244 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:28.244 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:01:28.244 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:28.244 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:01:28.244 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:28.244 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:01:28.244 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:28.244 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:01:28.244 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:28.244 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:01:28.244 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:28.244 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:01:28.244 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:28.244 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:01:28.244 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:28.244 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:01:28.244 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:28.245 
Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:01:28.245 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:28.245 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:01:28.245 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:28.245 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:01:28.245 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:01:28.245 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:01:28.245 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:28.245 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:01:28.245 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:28.245 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:01:28.245 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:28.245 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:01:28.245 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:28.245 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:01:28.245 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:01:28.245 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:01:28.245 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:01:28.245 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:01:28.245 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:01:28.245 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:01:28.245 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:01:28.245 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:01:28.245 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:01:28.245 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:01:28.245 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:01:28.245 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:01:28.245 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:28.245 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:01:28.245 Installing symlink pointing to librte_latencystats.so.24 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:28.245 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:01:28.245 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:28.245 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.24 00:01:28.245 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:01:28.245 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:01:28.245 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:28.245 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.24 00:01:28.245 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:01:28.245 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:01:28.245 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:28.245 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:01:28.245 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:28.245 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:01:28.245 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mldev.so 00:01:28.246 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:01:28.246 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:28.246 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:01:28.246 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:28.246 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:01:28.246 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:28.246 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.24 00:01:28.246 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:01:28.246 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:01:28.246 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:28.246 Installing symlink pointing to librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:01:28.246 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:28.246 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:01:28.246 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:28.246 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:01:28.246 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:01:28.246 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:01:28.246 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:28.246 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.24 00:01:28.246 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:01:28.246 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:01:28.246 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:28.246 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.24 00:01:28.246 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:01:28.246 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:01:28.246 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:28.246 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:01:28.246 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:28.246 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.24 00:01:28.246 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:01:28.246 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:01:28.246 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:01:28.246 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:01:28.246 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:01:28.246 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 
00:01:28.246 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:01:28.246 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:01:28.246 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:01:28.246 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:01:28.505 21:47:39 -- common/autobuild_common.sh@192 -- $ uname -s 00:01:28.505 21:47:39 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:28.505 21:47:39 -- common/autobuild_common.sh@203 -- $ cat 00:01:28.505 21:47:39 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:28.505 00:01:28.505 real 0m27.599s 00:01:28.505 user 8m2.184s 00:01:28.505 sys 2m40.623s 00:01:28.505 21:47:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.505 21:47:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.505 ************************************ 00:01:28.505 END TEST build_native_dpdk 00:01:28.505 ************************************ 00:01:28.505 21:47:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:28.505 21:47:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:28.505 21:47:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:28.505 21:47:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:28.505 21:47:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:28.505 21:47:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:28.505 21:47:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:28.506 21:47:39 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:01:28.506 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:28.765 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:01:28.765 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:01:28.765 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:29.024 Using 'verbs' RDMA provider 00:01:44.484 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:56.699 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:56.699 Creating mk/config.mk...done. 00:01:56.699 Creating mk/cc.flags.mk...done. 00:01:56.699 Type 'make' to build. 00:01:56.699 21:48:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:56.699 21:48:07 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:56.699 21:48:07 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:56.699 21:48:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:56.699 ************************************ 00:01:56.699 START TEST make 00:01:56.699 ************************************ 00:01:56.699 21:48:07 -- common/autotest_common.sh@1104 -- $ make -j112 00:01:56.699 make[1]: Nothing to be done for 'all'. 
00:02:06.724 CC lib/ut/ut.o 00:02:06.724 CC lib/log/log.o 00:02:06.724 CC lib/log/log_flags.o 00:02:06.724 CC lib/log/log_deprecated.o 00:02:06.724 CC lib/ut_mock/mock.o 00:02:06.724 LIB libspdk_ut_mock.a 00:02:06.724 LIB libspdk_ut.a 00:02:06.724 LIB libspdk_log.a 00:02:06.724 SO libspdk_ut.so.1.0 00:02:06.724 SO libspdk_ut_mock.so.5.0 00:02:06.724 SO libspdk_log.so.6.1 00:02:06.724 SYMLINK libspdk_ut.so 00:02:06.724 SYMLINK libspdk_log.so 00:02:06.724 SYMLINK libspdk_ut_mock.so 00:02:06.982 CXX lib/trace_parser/trace.o 00:02:06.982 CC lib/ioat/ioat.o 00:02:06.982 CC lib/util/base64.o 00:02:06.982 CC lib/dma/dma.o 00:02:06.982 CC lib/util/bit_array.o 00:02:06.982 CC lib/util/cpuset.o 00:02:06.982 CC lib/util/crc16.o 00:02:06.982 CC lib/util/crc32.o 00:02:06.982 CC lib/util/crc32c.o 00:02:06.982 CC lib/util/crc64.o 00:02:06.982 CC lib/util/crc32_ieee.o 00:02:06.982 CC lib/util/dif.o 00:02:06.982 CC lib/util/fd.o 00:02:06.982 CC lib/util/file.o 00:02:06.982 CC lib/util/hexlify.o 00:02:06.982 CC lib/util/iov.o 00:02:06.982 CC lib/util/math.o 00:02:06.982 CC lib/util/pipe.o 00:02:06.982 CC lib/util/strerror_tls.o 00:02:06.982 CC lib/util/uuid.o 00:02:06.982 CC lib/util/string.o 00:02:06.982 CC lib/util/xor.o 00:02:06.982 CC lib/util/fd_group.o 00:02:06.982 CC lib/util/zipf.o 00:02:06.982 CC lib/vfio_user/host/vfio_user_pci.o 00:02:06.982 CC lib/vfio_user/host/vfio_user.o 00:02:07.240 LIB libspdk_dma.a 00:02:07.240 SO libspdk_dma.so.3.0 00:02:07.240 LIB libspdk_ioat.a 00:02:07.240 SO libspdk_ioat.so.6.0 00:02:07.240 SYMLINK libspdk_dma.so 00:02:07.240 LIB libspdk_vfio_user.a 00:02:07.240 SYMLINK libspdk_ioat.so 00:02:07.240 SO libspdk_vfio_user.so.4.0 00:02:07.240 SYMLINK libspdk_vfio_user.so 00:02:07.499 LIB libspdk_util.a 00:02:07.499 SO libspdk_util.so.8.0 00:02:07.499 SYMLINK libspdk_util.so 00:02:07.499 LIB libspdk_trace_parser.a 00:02:07.757 SO libspdk_trace_parser.so.4.0 00:02:07.757 SYMLINK libspdk_trace_parser.so 00:02:07.757 CC lib/vmd/led.o 00:02:07.757 CC lib/vmd/vmd.o 00:02:07.757 CC lib/env_dpdk/env.o 00:02:07.757 CC lib/env_dpdk/memory.o 00:02:07.757 CC lib/env_dpdk/pci.o 00:02:07.757 CC lib/env_dpdk/threads.o 00:02:07.757 CC lib/env_dpdk/pci_ioat.o 00:02:07.757 CC lib/env_dpdk/init.o 00:02:07.757 CC lib/env_dpdk/pci_virtio.o 00:02:07.757 CC lib/env_dpdk/pci_idxd.o 00:02:07.757 CC lib/env_dpdk/pci_vmd.o 00:02:07.757 CC lib/env_dpdk/pci_event.o 00:02:07.757 CC lib/env_dpdk/sigbus_handler.o 00:02:07.757 CC lib/env_dpdk/pci_dpdk.o 00:02:07.757 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:07.757 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:07.757 CC lib/rdma/common.o 00:02:07.757 CC lib/rdma/rdma_verbs.o 00:02:07.757 CC lib/conf/conf.o 00:02:07.757 CC lib/json/json_parse.o 00:02:07.757 CC lib/idxd/idxd.o 00:02:07.757 CC lib/json/json_util.o 00:02:07.757 CC lib/idxd/idxd_user.o 00:02:07.757 CC lib/json/json_write.o 00:02:07.757 CC lib/idxd/idxd_kernel.o 00:02:08.016 LIB libspdk_conf.a 00:02:08.016 LIB libspdk_rdma.a 00:02:08.016 SO libspdk_conf.so.5.0 00:02:08.016 LIB libspdk_json.a 00:02:08.016 SO libspdk_rdma.so.5.0 00:02:08.016 SYMLINK libspdk_conf.so 00:02:08.274 SO libspdk_json.so.5.1 00:02:08.274 SYMLINK libspdk_rdma.so 00:02:08.274 SYMLINK libspdk_json.so 00:02:08.274 LIB libspdk_idxd.a 00:02:08.274 LIB libspdk_vmd.a 00:02:08.274 SO libspdk_idxd.so.11.0 00:02:08.274 SO libspdk_vmd.so.5.0 00:02:08.274 SYMLINK libspdk_vmd.so 00:02:08.274 SYMLINK libspdk_idxd.so 00:02:08.532 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.532 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.532 CC 
lib/jsonrpc/jsonrpc_client.o 00:02:08.532 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.789 LIB libspdk_jsonrpc.a 00:02:08.789 SO libspdk_jsonrpc.so.5.1 00:02:08.789 SYMLINK libspdk_jsonrpc.so 00:02:08.789 LIB libspdk_env_dpdk.a 00:02:08.789 SO libspdk_env_dpdk.so.13.0 00:02:09.047 SYMLINK libspdk_env_dpdk.so 00:02:09.047 CC lib/rpc/rpc.o 00:02:09.305 LIB libspdk_rpc.a 00:02:09.305 SO libspdk_rpc.so.5.0 00:02:09.305 SYMLINK libspdk_rpc.so 00:02:09.564 CC lib/trace/trace.o 00:02:09.564 CC lib/trace/trace_flags.o 00:02:09.564 CC lib/trace/trace_rpc.o 00:02:09.564 CC lib/notify/notify.o 00:02:09.564 CC lib/notify/notify_rpc.o 00:02:09.564 CC lib/sock/sock.o 00:02:09.564 CC lib/sock/sock_rpc.o 00:02:09.822 LIB libspdk_notify.a 00:02:09.822 LIB libspdk_trace.a 00:02:09.822 SO libspdk_notify.so.5.0 00:02:09.822 SO libspdk_trace.so.9.0 00:02:09.822 SYMLINK libspdk_notify.so 00:02:09.822 SYMLINK libspdk_trace.so 00:02:09.822 LIB libspdk_sock.a 00:02:09.822 SO libspdk_sock.so.8.0 00:02:10.081 SYMLINK libspdk_sock.so 00:02:10.081 CC lib/thread/thread.o 00:02:10.081 CC lib/thread/iobuf.o 00:02:10.081 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:10.081 CC lib/nvme/nvme_ctrlr.o 00:02:10.081 CC lib/nvme/nvme_fabric.o 00:02:10.081 CC lib/nvme/nvme_ns_cmd.o 00:02:10.081 CC lib/nvme/nvme_ns.o 00:02:10.081 CC lib/nvme/nvme_pcie_common.o 00:02:10.081 CC lib/nvme/nvme_pcie.o 00:02:10.081 CC lib/nvme/nvme_qpair.o 00:02:10.081 CC lib/nvme/nvme.o 00:02:10.081 CC lib/nvme/nvme_transport.o 00:02:10.081 CC lib/nvme/nvme_quirks.o 00:02:10.081 CC lib/nvme/nvme_discovery.o 00:02:10.081 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:10.081 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:10.081 CC lib/nvme/nvme_tcp.o 00:02:10.081 CC lib/nvme/nvme_opal.o 00:02:10.081 CC lib/nvme/nvme_io_msg.o 00:02:10.081 CC lib/nvme/nvme_poll_group.o 00:02:10.081 CC lib/nvme/nvme_zns.o 00:02:10.081 CC lib/nvme/nvme_cuse.o 00:02:10.081 CC lib/nvme/nvme_vfio_user.o 00:02:10.339 CC lib/nvme/nvme_rdma.o 00:02:11.275 LIB libspdk_thread.a 00:02:11.275 SO libspdk_thread.so.9.0 00:02:11.275 SYMLINK libspdk_thread.so 00:02:11.533 CC lib/virtio/virtio.o 00:02:11.533 CC lib/virtio/virtio_vhost_user.o 00:02:11.533 CC lib/virtio/virtio_vfio_user.o 00:02:11.533 CC lib/virtio/virtio_pci.o 00:02:11.533 CC lib/accel/accel.o 00:02:11.533 CC lib/accel/accel_rpc.o 00:02:11.533 CC lib/accel/accel_sw.o 00:02:11.533 CC lib/blob/blobstore.o 00:02:11.533 CC lib/blob/request.o 00:02:11.533 CC lib/init/json_config.o 00:02:11.533 CC lib/blob/zeroes.o 00:02:11.533 CC lib/init/subsystem.o 00:02:11.533 CC lib/blob/blob_bs_dev.o 00:02:11.533 CC lib/init/subsystem_rpc.o 00:02:11.533 CC lib/init/rpc.o 00:02:11.791 LIB libspdk_nvme.a 00:02:11.791 LIB libspdk_init.a 00:02:11.791 LIB libspdk_virtio.a 00:02:11.791 SO libspdk_nvme.so.12.0 00:02:11.791 SO libspdk_init.so.4.0 00:02:11.791 SO libspdk_virtio.so.6.0 00:02:11.791 SYMLINK libspdk_init.so 00:02:11.791 SYMLINK libspdk_virtio.so 00:02:12.050 SYMLINK libspdk_nvme.so 00:02:12.050 CC lib/event/app.o 00:02:12.050 CC lib/event/reactor.o 00:02:12.050 CC lib/event/log_rpc.o 00:02:12.050 CC lib/event/app_rpc.o 00:02:12.050 CC lib/event/scheduler_static.o 00:02:12.308 LIB libspdk_accel.a 00:02:12.308 SO libspdk_accel.so.14.0 00:02:12.308 SYMLINK libspdk_accel.so 00:02:12.308 LIB libspdk_event.a 00:02:12.565 SO libspdk_event.so.12.0 00:02:12.565 SYMLINK libspdk_event.so 00:02:12.565 CC lib/bdev/bdev_rpc.o 00:02:12.565 CC lib/bdev/bdev.o 00:02:12.565 CC lib/bdev/bdev_zone.o 00:02:12.565 CC lib/bdev/part.o 00:02:12.565 CC lib/bdev/scsi_nvme.o 00:02:13.501 
LIB libspdk_blob.a 00:02:13.501 SO libspdk_blob.so.10.1 00:02:13.501 SYMLINK libspdk_blob.so 00:02:13.759 CC lib/lvol/lvol.o 00:02:13.759 CC lib/blobfs/blobfs.o 00:02:13.759 CC lib/blobfs/tree.o 00:02:14.324 LIB libspdk_bdev.a 00:02:14.324 SO libspdk_bdev.so.14.0 00:02:14.324 LIB libspdk_blobfs.a 00:02:14.324 LIB libspdk_lvol.a 00:02:14.324 SO libspdk_lvol.so.9.1 00:02:14.324 SO libspdk_blobfs.so.9.0 00:02:14.324 SYMLINK libspdk_bdev.so 00:02:14.582 SYMLINK libspdk_lvol.so 00:02:14.582 SYMLINK libspdk_blobfs.so 00:02:14.582 CC lib/ublk/ublk.o 00:02:14.582 CC lib/ublk/ublk_rpc.o 00:02:14.582 CC lib/nbd/nbd_rpc.o 00:02:14.582 CC lib/nbd/nbd.o 00:02:14.582 CC lib/scsi/dev.o 00:02:14.582 CC lib/scsi/port.o 00:02:14.582 CC lib/scsi/lun.o 00:02:14.582 CC lib/scsi/scsi.o 00:02:14.582 CC lib/scsi/scsi_bdev.o 00:02:14.582 CC lib/scsi/scsi_pr.o 00:02:14.582 CC lib/nvmf/ctrlr.o 00:02:14.582 CC lib/scsi/scsi_rpc.o 00:02:14.582 CC lib/scsi/task.o 00:02:14.582 CC lib/nvmf/ctrlr_discovery.o 00:02:14.582 CC lib/nvmf/ctrlr_bdev.o 00:02:14.582 CC lib/nvmf/subsystem.o 00:02:14.582 CC lib/nvmf/nvmf.o 00:02:14.582 CC lib/nvmf/transport.o 00:02:14.582 CC lib/nvmf/nvmf_rpc.o 00:02:14.582 CC lib/nvmf/tcp.o 00:02:14.582 CC lib/nvmf/rdma.o 00:02:14.582 CC lib/ftl/ftl_core.o 00:02:14.582 CC lib/ftl/ftl_init.o 00:02:14.582 CC lib/ftl/ftl_debug.o 00:02:14.582 CC lib/ftl/ftl_layout.o 00:02:14.582 CC lib/ftl/ftl_io.o 00:02:14.582 CC lib/ftl/ftl_sb.o 00:02:14.840 CC lib/ftl/ftl_l2p.o 00:02:14.840 CC lib/ftl/ftl_l2p_flat.o 00:02:14.840 CC lib/ftl/ftl_nv_cache.o 00:02:14.840 CC lib/ftl/ftl_band.o 00:02:14.840 CC lib/ftl/ftl_band_ops.o 00:02:14.840 CC lib/ftl/ftl_writer.o 00:02:14.840 CC lib/ftl/ftl_rq.o 00:02:14.840 CC lib/ftl/ftl_reloc.o 00:02:14.840 CC lib/ftl/ftl_l2p_cache.o 00:02:14.840 CC lib/ftl/ftl_p2l.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:14.840 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:14.840 CC lib/ftl/utils/ftl_conf.o 00:02:14.840 CC lib/ftl/utils/ftl_mempool.o 00:02:14.840 CC lib/ftl/utils/ftl_md.o 00:02:14.840 CC lib/ftl/utils/ftl_bitmap.o 00:02:14.840 CC lib/ftl/utils/ftl_property.o 00:02:14.840 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:14.840 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:14.840 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:14.840 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:14.840 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:14.840 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:14.840 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:14.840 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:14.840 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:14.840 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:14.840 CC lib/ftl/base/ftl_base_dev.o 00:02:14.840 CC lib/ftl/base/ftl_base_bdev.o 00:02:14.840 CC lib/ftl/ftl_trace.o 00:02:15.098 LIB libspdk_nbd.a 00:02:15.098 SO libspdk_nbd.so.6.0 00:02:15.355 LIB libspdk_scsi.a 00:02:15.355 SYMLINK libspdk_nbd.so 00:02:15.355 SO libspdk_scsi.so.8.0 00:02:15.355 LIB libspdk_ublk.a 00:02:15.355 SO libspdk_ublk.so.2.0 00:02:15.355 SYMLINK libspdk_scsi.so 00:02:15.355 SYMLINK libspdk_ublk.so 00:02:15.613 
LIB libspdk_ftl.a 00:02:15.613 CC lib/iscsi/conn.o 00:02:15.613 CC lib/iscsi/init_grp.o 00:02:15.613 CC lib/iscsi/iscsi.o 00:02:15.613 CC lib/iscsi/param.o 00:02:15.613 CC lib/iscsi/md5.o 00:02:15.613 CC lib/iscsi/portal_grp.o 00:02:15.613 CC lib/vhost/vhost_rpc.o 00:02:15.613 CC lib/iscsi/tgt_node.o 00:02:15.613 CC lib/vhost/vhost.o 00:02:15.613 CC lib/iscsi/iscsi_subsystem.o 00:02:15.613 CC lib/vhost/vhost_blk.o 00:02:15.613 CC lib/iscsi/iscsi_rpc.o 00:02:15.613 CC lib/vhost/vhost_scsi.o 00:02:15.613 CC lib/iscsi/task.o 00:02:15.613 CC lib/vhost/rte_vhost_user.o 00:02:15.613 SO libspdk_ftl.so.8.0 00:02:16.177 SYMLINK libspdk_ftl.so 00:02:16.177 LIB libspdk_nvmf.a 00:02:16.436 SO libspdk_nvmf.so.17.0 00:02:16.436 LIB libspdk_vhost.a 00:02:16.436 SO libspdk_vhost.so.7.1 00:02:16.436 SYMLINK libspdk_nvmf.so 00:02:16.436 SYMLINK libspdk_vhost.so 00:02:16.436 LIB libspdk_iscsi.a 00:02:16.694 SO libspdk_iscsi.so.7.0 00:02:16.694 SYMLINK libspdk_iscsi.so 00:02:17.260 CC module/env_dpdk/env_dpdk_rpc.o 00:02:17.260 CC module/accel/dsa/accel_dsa.o 00:02:17.260 CC module/accel/error/accel_error.o 00:02:17.260 CC module/accel/dsa/accel_dsa_rpc.o 00:02:17.260 CC module/accel/error/accel_error_rpc.o 00:02:17.260 CC module/accel/ioat/accel_ioat.o 00:02:17.260 CC module/scheduler/gscheduler/gscheduler.o 00:02:17.260 CC module/accel/ioat/accel_ioat_rpc.o 00:02:17.260 CC module/blob/bdev/blob_bdev.o 00:02:17.260 CC module/accel/iaa/accel_iaa.o 00:02:17.260 CC module/accel/iaa/accel_iaa_rpc.o 00:02:17.260 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:17.260 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:17.260 CC module/sock/posix/posix.o 00:02:17.261 LIB libspdk_env_dpdk_rpc.a 00:02:17.261 SO libspdk_env_dpdk_rpc.so.5.0 00:02:17.518 SYMLINK libspdk_env_dpdk_rpc.so 00:02:17.518 LIB libspdk_scheduler_gscheduler.a 00:02:17.518 LIB libspdk_accel_ioat.a 00:02:17.518 LIB libspdk_accel_error.a 00:02:17.518 LIB libspdk_scheduler_dpdk_governor.a 00:02:17.518 SO libspdk_scheduler_gscheduler.so.3.0 00:02:17.518 SO libspdk_accel_ioat.so.5.0 00:02:17.518 LIB libspdk_accel_iaa.a 00:02:17.518 LIB libspdk_accel_dsa.a 00:02:17.518 LIB libspdk_scheduler_dynamic.a 00:02:17.518 SO libspdk_accel_error.so.1.0 00:02:17.518 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:17.518 LIB libspdk_blob_bdev.a 00:02:17.518 SO libspdk_accel_iaa.so.2.0 00:02:17.518 SO libspdk_accel_dsa.so.4.0 00:02:17.518 SYMLINK libspdk_scheduler_gscheduler.so 00:02:17.518 SYMLINK libspdk_accel_ioat.so 00:02:17.518 SO libspdk_scheduler_dynamic.so.3.0 00:02:17.518 SO libspdk_blob_bdev.so.10.1 00:02:17.518 SYMLINK libspdk_accel_error.so 00:02:17.518 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:17.518 SYMLINK libspdk_accel_dsa.so 00:02:17.518 SYMLINK libspdk_accel_iaa.so 00:02:17.518 SYMLINK libspdk_scheduler_dynamic.so 00:02:17.518 SYMLINK libspdk_blob_bdev.so 00:02:17.776 LIB libspdk_sock_posix.a 00:02:17.776 SO libspdk_sock_posix.so.5.0 00:02:18.034 CC module/bdev/delay/vbdev_delay.o 00:02:18.034 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.034 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.034 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.034 CC module/bdev/malloc/bdev_malloc.o 00:02:18.034 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.034 CC module/bdev/aio/bdev_aio.o 00:02:18.034 CC module/bdev/error/vbdev_error.o 00:02:18.034 CC module/bdev/gpt/gpt.o 00:02:18.034 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:18.034 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.034 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.034 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:02:18.034 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.034 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.034 CC module/bdev/null/bdev_null.o 00:02:18.034 CC module/bdev/nvme/nvme_rpc.o 00:02:18.034 CC module/bdev/nvme/bdev_nvme.o 00:02:18.034 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.034 CC module/bdev/nvme/vbdev_opal.o 00:02:18.034 CC module/bdev/null/bdev_null_rpc.o 00:02:18.034 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.034 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.034 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.034 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:18.034 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:18.034 CC module/bdev/ftl/bdev_ftl.o 00:02:18.034 CC module/bdev/raid/bdev_raid.o 00:02:18.034 CC module/bdev/raid/bdev_raid_rpc.o 00:02:18.034 CC module/bdev/raid/bdev_raid_sb.o 00:02:18.034 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.034 CC module/bdev/raid/raid0.o 00:02:18.034 CC module/bdev/split/vbdev_split.o 00:02:18.034 CC module/bdev/raid/raid1.o 00:02:18.034 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.034 CC module/bdev/iscsi/bdev_iscsi.o 00:02:18.034 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:18.034 CC module/bdev/raid/concat.o 00:02:18.034 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:18.034 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:18.034 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.034 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.034 SYMLINK libspdk_sock_posix.so 00:02:18.034 LIB libspdk_blobfs_bdev.a 00:02:18.293 SO libspdk_blobfs_bdev.so.5.0 00:02:18.293 LIB libspdk_bdev_gpt.a 00:02:18.293 LIB libspdk_bdev_error.a 00:02:18.293 LIB libspdk_bdev_split.a 00:02:18.293 LIB libspdk_bdev_null.a 00:02:18.293 SO libspdk_bdev_gpt.so.5.0 00:02:18.293 SO libspdk_bdev_error.so.5.0 00:02:18.293 LIB libspdk_bdev_aio.a 00:02:18.293 LIB libspdk_bdev_ftl.a 00:02:18.293 LIB libspdk_bdev_passthru.a 00:02:18.293 LIB libspdk_bdev_malloc.a 00:02:18.293 SO libspdk_bdev_split.so.5.0 00:02:18.293 SYMLINK libspdk_blobfs_bdev.so 00:02:18.293 LIB libspdk_bdev_zone_block.a 00:02:18.293 SO libspdk_bdev_null.so.5.0 00:02:18.293 LIB libspdk_bdev_delay.a 00:02:18.293 SO libspdk_bdev_aio.so.5.0 00:02:18.293 SO libspdk_bdev_ftl.so.5.0 00:02:18.293 SO libspdk_bdev_malloc.so.5.0 00:02:18.293 SYMLINK libspdk_bdev_gpt.so 00:02:18.293 SO libspdk_bdev_passthru.so.5.0 00:02:18.293 LIB libspdk_bdev_iscsi.a 00:02:18.293 SO libspdk_bdev_zone_block.so.5.0 00:02:18.293 SYMLINK libspdk_bdev_error.so 00:02:18.293 SO libspdk_bdev_delay.so.5.0 00:02:18.293 SYMLINK libspdk_bdev_split.so 00:02:18.293 SO libspdk_bdev_iscsi.so.5.0 00:02:18.293 SYMLINK libspdk_bdev_null.so 00:02:18.293 SYMLINK libspdk_bdev_ftl.so 00:02:18.293 LIB libspdk_bdev_lvol.a 00:02:18.293 SYMLINK libspdk_bdev_aio.so 00:02:18.293 SYMLINK libspdk_bdev_malloc.so 00:02:18.293 SYMLINK libspdk_bdev_passthru.so 00:02:18.293 SYMLINK libspdk_bdev_zone_block.so 00:02:18.293 LIB libspdk_bdev_virtio.a 00:02:18.293 SYMLINK libspdk_bdev_iscsi.so 00:02:18.293 SYMLINK libspdk_bdev_delay.so 00:02:18.293 SO libspdk_bdev_lvol.so.5.0 00:02:18.592 SO libspdk_bdev_virtio.so.5.0 00:02:18.592 SYMLINK libspdk_bdev_lvol.so 00:02:18.592 SYMLINK libspdk_bdev_virtio.so 00:02:18.592 LIB libspdk_bdev_raid.a 00:02:18.592 SO libspdk_bdev_raid.so.5.0 00:02:18.863 SYMLINK libspdk_bdev_raid.so 00:02:19.431 LIB libspdk_bdev_nvme.a 00:02:19.431 SO libspdk_bdev_nvme.so.6.0 00:02:19.689 SYMLINK libspdk_bdev_nvme.so 00:02:20.256 CC module/event/subsystems/sock/sock.o 00:02:20.256 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:02:20.256 CC module/event/subsystems/scheduler/scheduler.o 00:02:20.256 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:20.256 CC module/event/subsystems/iobuf/iobuf.o 00:02:20.256 CC module/event/subsystems/vmd/vmd.o 00:02:20.256 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:20.256 LIB libspdk_event_sock.a 00:02:20.256 LIB libspdk_event_vhost_blk.a 00:02:20.256 SO libspdk_event_sock.so.4.0 00:02:20.256 LIB libspdk_event_scheduler.a 00:02:20.256 LIB libspdk_event_vmd.a 00:02:20.256 LIB libspdk_event_iobuf.a 00:02:20.256 SO libspdk_event_vhost_blk.so.2.0 00:02:20.256 SO libspdk_event_iobuf.so.2.0 00:02:20.256 SO libspdk_event_scheduler.so.3.0 00:02:20.256 SO libspdk_event_vmd.so.5.0 00:02:20.256 SYMLINK libspdk_event_sock.so 00:02:20.256 SYMLINK libspdk_event_vhost_blk.so 00:02:20.515 SYMLINK libspdk_event_iobuf.so 00:02:20.515 SYMLINK libspdk_event_vmd.so 00:02:20.515 SYMLINK libspdk_event_scheduler.so 00:02:20.515 CC module/event/subsystems/accel/accel.o 00:02:20.774 LIB libspdk_event_accel.a 00:02:20.774 SO libspdk_event_accel.so.5.0 00:02:20.774 SYMLINK libspdk_event_accel.so 00:02:21.033 CC module/event/subsystems/bdev/bdev.o 00:02:21.291 LIB libspdk_event_bdev.a 00:02:21.291 SO libspdk_event_bdev.so.5.0 00:02:21.291 SYMLINK libspdk_event_bdev.so 00:02:21.551 CC module/event/subsystems/nbd/nbd.o 00:02:21.551 CC module/event/subsystems/scsi/scsi.o 00:02:21.551 CC module/event/subsystems/ublk/ublk.o 00:02:21.551 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:21.551 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:21.810 LIB libspdk_event_nbd.a 00:02:21.810 LIB libspdk_event_ublk.a 00:02:21.810 LIB libspdk_event_scsi.a 00:02:21.810 SO libspdk_event_nbd.so.5.0 00:02:21.810 SO libspdk_event_ublk.so.2.0 00:02:21.810 SO libspdk_event_scsi.so.5.0 00:02:21.810 LIB libspdk_event_nvmf.a 00:02:21.810 SYMLINK libspdk_event_nbd.so 00:02:21.810 SO libspdk_event_nvmf.so.5.0 00:02:21.810 SYMLINK libspdk_event_ublk.so 00:02:21.810 SYMLINK libspdk_event_scsi.so 00:02:22.069 SYMLINK libspdk_event_nvmf.so 00:02:22.069 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:22.069 CC module/event/subsystems/iscsi/iscsi.o 00:02:22.327 LIB libspdk_event_vhost_scsi.a 00:02:22.327 LIB libspdk_event_iscsi.a 00:02:22.327 SO libspdk_event_vhost_scsi.so.2.0 00:02:22.327 SO libspdk_event_iscsi.so.5.0 00:02:22.327 SYMLINK libspdk_event_vhost_scsi.so 00:02:22.327 SYMLINK libspdk_event_iscsi.so 00:02:22.585 SO libspdk.so.5.0 00:02:22.585 SYMLINK libspdk.so 00:02:22.845 CC app/spdk_nvme_discover/discovery_aer.o 00:02:22.845 CXX app/trace/trace.o 00:02:22.845 CC app/spdk_nvme_perf/perf.o 00:02:22.845 CC app/spdk_lspci/spdk_lspci.o 00:02:22.845 CC app/trace_record/trace_record.o 00:02:22.845 TEST_HEADER include/spdk/accel.h 00:02:22.845 TEST_HEADER include/spdk/accel_module.h 00:02:22.845 TEST_HEADER include/spdk/assert.h 00:02:22.845 TEST_HEADER include/spdk/barrier.h 00:02:22.845 CC app/spdk_top/spdk_top.o 00:02:22.845 TEST_HEADER include/spdk/base64.h 00:02:22.845 CC app/spdk_nvme_identify/identify.o 00:02:22.845 TEST_HEADER include/spdk/bdev.h 00:02:22.845 CC test/rpc_client/rpc_client_test.o 00:02:22.845 TEST_HEADER include/spdk/bdev_module.h 00:02:22.845 TEST_HEADER include/spdk/bit_array.h 00:02:22.845 TEST_HEADER include/spdk/bdev_zone.h 00:02:22.845 TEST_HEADER include/spdk/bit_pool.h 00:02:22.845 TEST_HEADER include/spdk/blob_bdev.h 00:02:22.845 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:22.845 TEST_HEADER include/spdk/blobfs.h 00:02:22.845 TEST_HEADER 
include/spdk/blob.h 00:02:22.845 TEST_HEADER include/spdk/conf.h 00:02:22.845 TEST_HEADER include/spdk/config.h 00:02:22.845 TEST_HEADER include/spdk/cpuset.h 00:02:22.845 TEST_HEADER include/spdk/crc16.h 00:02:22.845 TEST_HEADER include/spdk/crc32.h 00:02:22.845 TEST_HEADER include/spdk/crc64.h 00:02:22.845 TEST_HEADER include/spdk/dif.h 00:02:22.845 TEST_HEADER include/spdk/dma.h 00:02:22.845 TEST_HEADER include/spdk/endian.h 00:02:22.845 TEST_HEADER include/spdk/env_dpdk.h 00:02:22.845 TEST_HEADER include/spdk/env.h 00:02:22.845 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:22.845 TEST_HEADER include/spdk/event.h 00:02:22.845 TEST_HEADER include/spdk/fd_group.h 00:02:22.845 TEST_HEADER include/spdk/fd.h 00:02:22.845 TEST_HEADER include/spdk/ftl.h 00:02:22.845 TEST_HEADER include/spdk/file.h 00:02:22.845 TEST_HEADER include/spdk/gpt_spec.h 00:02:22.845 TEST_HEADER include/spdk/hexlify.h 00:02:22.845 TEST_HEADER include/spdk/histogram_data.h 00:02:22.846 TEST_HEADER include/spdk/idxd.h 00:02:22.846 TEST_HEADER include/spdk/idxd_spec.h 00:02:22.846 CC app/spdk_dd/spdk_dd.o 00:02:22.846 TEST_HEADER include/spdk/init.h 00:02:22.846 TEST_HEADER include/spdk/ioat.h 00:02:22.846 TEST_HEADER include/spdk/ioat_spec.h 00:02:22.846 TEST_HEADER include/spdk/iscsi_spec.h 00:02:22.846 TEST_HEADER include/spdk/json.h 00:02:22.846 TEST_HEADER include/spdk/jsonrpc.h 00:02:22.846 TEST_HEADER include/spdk/likely.h 00:02:22.846 TEST_HEADER include/spdk/log.h 00:02:22.846 TEST_HEADER include/spdk/lvol.h 00:02:22.846 TEST_HEADER include/spdk/memory.h 00:02:22.846 TEST_HEADER include/spdk/mmio.h 00:02:22.846 TEST_HEADER include/spdk/nbd.h 00:02:22.846 TEST_HEADER include/spdk/notify.h 00:02:22.846 TEST_HEADER include/spdk/nvme.h 00:02:22.846 CC app/nvmf_tgt/nvmf_main.o 00:02:22.846 TEST_HEADER include/spdk/nvme_intel.h 00:02:22.846 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:22.846 TEST_HEADER include/spdk/nvme_spec.h 00:02:22.846 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:22.846 CC app/iscsi_tgt/iscsi_tgt.o 00:02:22.846 CC app/vhost/vhost.o 00:02:22.846 TEST_HEADER include/spdk/nvme_zns.h 00:02:22.846 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:22.846 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:22.846 TEST_HEADER include/spdk/nvmf.h 00:02:22.846 TEST_HEADER include/spdk/nvmf_transport.h 00:02:22.846 TEST_HEADER include/spdk/nvmf_spec.h 00:02:22.846 TEST_HEADER include/spdk/opal.h 00:02:22.846 TEST_HEADER include/spdk/opal_spec.h 00:02:22.846 TEST_HEADER include/spdk/pci_ids.h 00:02:22.846 TEST_HEADER include/spdk/queue.h 00:02:22.846 TEST_HEADER include/spdk/pipe.h 00:02:22.846 TEST_HEADER include/spdk/reduce.h 00:02:22.846 TEST_HEADER include/spdk/scheduler.h 00:02:22.846 TEST_HEADER include/spdk/rpc.h 00:02:22.846 TEST_HEADER include/spdk/scsi.h 00:02:22.846 TEST_HEADER include/spdk/scsi_spec.h 00:02:22.846 TEST_HEADER include/spdk/sock.h 00:02:22.846 TEST_HEADER include/spdk/string.h 00:02:22.846 TEST_HEADER include/spdk/stdinc.h 00:02:22.846 TEST_HEADER include/spdk/thread.h 00:02:22.846 TEST_HEADER include/spdk/trace_parser.h 00:02:22.846 TEST_HEADER include/spdk/trace.h 00:02:22.846 TEST_HEADER include/spdk/tree.h 00:02:22.846 TEST_HEADER include/spdk/ublk.h 00:02:22.846 TEST_HEADER include/spdk/util.h 00:02:22.846 TEST_HEADER include/spdk/uuid.h 00:02:22.846 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:22.846 TEST_HEADER include/spdk/version.h 00:02:22.846 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:22.846 TEST_HEADER include/spdk/vhost.h 00:02:22.846 TEST_HEADER include/spdk/vmd.h 
00:02:22.846 TEST_HEADER include/spdk/xor.h 00:02:22.846 CXX test/cpp_headers/accel.o 00:02:22.846 TEST_HEADER include/spdk/zipf.h 00:02:22.846 CXX test/cpp_headers/accel_module.o 00:02:22.846 CXX test/cpp_headers/assert.o 00:02:22.846 CXX test/cpp_headers/barrier.o 00:02:22.846 CXX test/cpp_headers/base64.o 00:02:22.846 CC app/spdk_tgt/spdk_tgt.o 00:02:22.846 CXX test/cpp_headers/bdev.o 00:02:22.846 CXX test/cpp_headers/bdev_zone.o 00:02:22.846 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:22.846 CXX test/cpp_headers/bdev_module.o 00:02:22.846 CXX test/cpp_headers/bit_array.o 00:02:22.846 CXX test/cpp_headers/bit_pool.o 00:02:22.846 CC examples/ioat/verify/verify.o 00:02:22.846 CC examples/nvme/arbitration/arbitration.o 00:02:22.846 CXX test/cpp_headers/blob_bdev.o 00:02:22.846 CC examples/nvme/hotplug/hotplug.o 00:02:22.846 CC examples/nvme/reconnect/reconnect.o 00:02:22.846 CXX test/cpp_headers/blobfs.o 00:02:22.846 CXX test/cpp_headers/blobfs_bdev.o 00:02:22.846 CC examples/ioat/perf/perf.o 00:02:22.846 CXX test/cpp_headers/blob.o 00:02:22.846 CC examples/nvme/hello_world/hello_world.o 00:02:22.846 CXX test/cpp_headers/conf.o 00:02:22.846 CC examples/nvme/abort/abort.o 00:02:22.846 CXX test/cpp_headers/config.o 00:02:22.846 CXX test/cpp_headers/cpuset.o 00:02:22.846 CXX test/cpp_headers/crc16.o 00:02:22.846 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:22.846 CXX test/cpp_headers/crc32.o 00:02:22.846 CXX test/cpp_headers/crc64.o 00:02:22.846 CXX test/cpp_headers/dma.o 00:02:22.846 CXX test/cpp_headers/dif.o 00:02:22.846 CXX test/cpp_headers/endian.o 00:02:22.846 CXX test/cpp_headers/env_dpdk.o 00:02:22.846 CC examples/accel/perf/accel_perf.o 00:02:22.846 CC examples/vmd/led/led.o 00:02:22.846 CXX test/cpp_headers/env.o 00:02:22.846 CXX test/cpp_headers/event.o 00:02:22.846 CXX test/cpp_headers/fd_group.o 00:02:22.846 CC app/fio/nvme/fio_plugin.o 00:02:22.846 CC examples/util/zipf/zipf.o 00:02:22.846 CXX test/cpp_headers/fd.o 00:02:22.846 CXX test/cpp_headers/ftl.o 00:02:22.846 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:22.846 CXX test/cpp_headers/file.o 00:02:22.846 CXX test/cpp_headers/gpt_spec.o 00:02:23.117 CC test/nvme/startup/startup.o 00:02:23.117 CC test/nvme/aer/aer.o 00:02:23.117 CXX test/cpp_headers/hexlify.o 00:02:23.117 CC examples/vmd/lsvmd/lsvmd.o 00:02:23.117 CXX test/cpp_headers/histogram_data.o 00:02:23.117 CXX test/cpp_headers/idxd.o 00:02:23.117 CXX test/cpp_headers/idxd_spec.o 00:02:23.118 CC test/event/event_perf/event_perf.o 00:02:23.118 CC test/nvme/e2edp/nvme_dp.o 00:02:23.118 CC test/event/reactor/reactor.o 00:02:23.118 CXX test/cpp_headers/init.o 00:02:23.118 CC examples/idxd/perf/perf.o 00:02:23.118 CXX test/cpp_headers/ioat.o 00:02:23.118 CC test/app/histogram_perf/histogram_perf.o 00:02:23.118 CC test/nvme/reset/reset.o 00:02:23.118 CC test/app/jsoncat/jsoncat.o 00:02:23.118 CC test/nvme/fdp/fdp.o 00:02:23.118 CC test/nvme/sgl/sgl.o 00:02:23.118 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:23.118 CC test/nvme/compliance/nvme_compliance.o 00:02:23.118 CC test/event/reactor_perf/reactor_perf.o 00:02:23.118 CC examples/sock/hello_world/hello_sock.o 00:02:23.118 CC test/nvme/err_injection/err_injection.o 00:02:23.118 CC test/nvme/reserve/reserve.o 00:02:23.118 CC test/thread/poller_perf/poller_perf.o 00:02:23.118 CC test/app/stub/stub.o 00:02:23.118 CC test/nvme/overhead/overhead.o 00:02:23.118 CC test/nvme/fused_ordering/fused_ordering.o 00:02:23.118 CC test/nvme/simple_copy/simple_copy.o 00:02:23.118 CC 
test/nvme/boot_partition/boot_partition.o 00:02:23.118 CC test/env/vtophys/vtophys.o 00:02:23.118 CC examples/bdev/hello_world/hello_bdev.o 00:02:23.118 CC test/env/memory/memory_ut.o 00:02:23.118 CC app/fio/bdev/fio_plugin.o 00:02:23.118 CC test/nvme/cuse/cuse.o 00:02:23.118 CC examples/bdev/bdevperf/bdevperf.o 00:02:23.118 CC test/nvme/connect_stress/connect_stress.o 00:02:23.118 CXX test/cpp_headers/ioat_spec.o 00:02:23.118 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:23.118 CC test/env/pci/pci_ut.o 00:02:23.118 CC examples/thread/thread/thread_ex.o 00:02:23.118 CC test/event/app_repeat/app_repeat.o 00:02:23.118 CC examples/blob/cli/blobcli.o 00:02:23.118 CC examples/blob/hello_world/hello_blob.o 00:02:23.118 CC test/dma/test_dma/test_dma.o 00:02:23.118 CC examples/nvmf/nvmf/nvmf.o 00:02:23.118 CC test/bdev/bdevio/bdevio.o 00:02:23.118 CC test/blobfs/mkfs/mkfs.o 00:02:23.118 CC test/accel/dif/dif.o 00:02:23.118 CC test/event/scheduler/scheduler.o 00:02:23.118 CC test/app/bdev_svc/bdev_svc.o 00:02:23.118 LINK spdk_lspci 00:02:23.386 CC test/lvol/esnap/esnap.o 00:02:23.386 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:23.386 CC test/env/mem_callbacks/mem_callbacks.o 00:02:23.386 LINK rpc_client_test 00:02:23.386 LINK spdk_nvme_discover 00:02:23.386 LINK nvmf_tgt 00:02:23.386 LINK interrupt_tgt 00:02:23.386 LINK led 00:02:23.649 LINK lsvmd 00:02:23.649 LINK reactor 00:02:23.649 LINK vhost 00:02:23.649 LINK cmb_copy 00:02:23.649 LINK jsoncat 00:02:23.649 LINK zipf 00:02:23.649 LINK env_dpdk_post_init 00:02:23.649 LINK vtophys 00:02:23.649 LINK spdk_trace_record 00:02:23.649 LINK iscsi_tgt 00:02:23.649 LINK histogram_perf 00:02:23.649 LINK poller_perf 00:02:23.649 LINK err_injection 00:02:23.650 LINK boot_partition 00:02:23.650 LINK reactor_perf 00:02:23.650 LINK ioat_perf 00:02:23.650 LINK hello_world 00:02:23.650 LINK doorbell_aers 00:02:23.650 LINK event_perf 00:02:23.650 LINK pmr_persistence 00:02:23.650 LINK stub 00:02:23.650 CXX test/cpp_headers/iscsi_spec.o 00:02:23.650 LINK startup 00:02:23.650 CXX test/cpp_headers/json.o 00:02:23.650 LINK connect_stress 00:02:23.650 CXX test/cpp_headers/jsonrpc.o 00:02:23.650 LINK app_repeat 00:02:23.650 CXX test/cpp_headers/likely.o 00:02:23.650 CXX test/cpp_headers/log.o 00:02:23.650 CXX test/cpp_headers/lvol.o 00:02:23.650 CXX test/cpp_headers/memory.o 00:02:23.650 CXX test/cpp_headers/mmio.o 00:02:23.650 CXX test/cpp_headers/nbd.o 00:02:23.650 CXX test/cpp_headers/notify.o 00:02:23.650 LINK verify 00:02:23.650 CXX test/cpp_headers/nvme.o 00:02:23.650 CXX test/cpp_headers/nvme_intel.o 00:02:23.650 CXX test/cpp_headers/nvme_ocssd.o 00:02:23.650 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:23.650 CXX test/cpp_headers/nvme_spec.o 00:02:23.650 LINK fused_ordering 00:02:23.650 LINK spdk_tgt 00:02:23.650 LINK hotplug 00:02:23.650 LINK bdev_svc 00:02:23.650 CXX test/cpp_headers/nvme_zns.o 00:02:23.650 CXX test/cpp_headers/nvmf_cmd.o 00:02:23.650 LINK mkfs 00:02:23.650 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:23.650 CXX test/cpp_headers/nvmf.o 00:02:23.650 CXX test/cpp_headers/nvmf_spec.o 00:02:23.650 LINK reserve 00:02:23.650 CXX test/cpp_headers/nvmf_transport.o 00:02:23.650 CXX test/cpp_headers/opal.o 00:02:23.650 LINK sgl 00:02:23.650 CXX test/cpp_headers/opal_spec.o 00:02:23.650 LINK reset 00:02:23.650 LINK hello_sock 00:02:23.650 CXX test/cpp_headers/pci_ids.o 00:02:23.650 CXX test/cpp_headers/pipe.o 00:02:23.650 LINK scheduler 00:02:23.650 CXX test/cpp_headers/queue.o 00:02:23.650 CXX test/cpp_headers/reduce.o 00:02:23.650 LINK 
spdk_dd 00:02:23.650 CXX test/cpp_headers/rpc.o 00:02:23.650 LINK simple_copy 00:02:23.650 CXX test/cpp_headers/scheduler.o 00:02:23.650 CXX test/cpp_headers/scsi.o 00:02:23.650 LINK hello_bdev 00:02:23.650 CXX test/cpp_headers/sock.o 00:02:23.650 CXX test/cpp_headers/scsi_spec.o 00:02:23.650 LINK hello_blob 00:02:23.650 CXX test/cpp_headers/stdinc.o 00:02:23.650 LINK thread 00:02:23.650 CXX test/cpp_headers/string.o 00:02:23.650 LINK nvme_dp 00:02:23.910 CXX test/cpp_headers/thread.o 00:02:23.910 CXX test/cpp_headers/trace.o 00:02:23.910 LINK arbitration 00:02:23.910 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:23.910 CXX test/cpp_headers/trace_parser.o 00:02:23.910 CXX test/cpp_headers/tree.o 00:02:23.910 CXX test/cpp_headers/ublk.o 00:02:23.910 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:23.910 LINK nvme_compliance 00:02:23.910 LINK nvmf 00:02:23.910 LINK aer 00:02:23.910 LINK reconnect 00:02:23.910 LINK test_dma 00:02:23.910 LINK overhead 00:02:23.910 CXX test/cpp_headers/util.o 00:02:23.910 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:23.910 CXX test/cpp_headers/uuid.o 00:02:23.910 LINK abort 00:02:23.910 LINK pci_ut 00:02:23.910 CXX test/cpp_headers/version.o 00:02:23.910 LINK dif 00:02:23.910 CXX test/cpp_headers/vfio_user_spec.o 00:02:23.910 CXX test/cpp_headers/vfio_user_pci.o 00:02:23.910 LINK spdk_trace 00:02:23.910 CXX test/cpp_headers/xor.o 00:02:23.910 CXX test/cpp_headers/vmd.o 00:02:23.911 CXX test/cpp_headers/vhost.o 00:02:23.911 LINK fdp 00:02:23.911 CXX test/cpp_headers/zipf.o 00:02:23.911 LINK idxd_perf 00:02:23.911 LINK bdevio 00:02:24.169 LINK accel_perf 00:02:24.169 LINK nvme_manage 00:02:24.169 LINK spdk_nvme 00:02:24.169 LINK spdk_bdev 00:02:24.169 LINK blobcli 00:02:24.169 LINK nvme_fuzz 00:02:24.428 LINK bdevperf 00:02:24.428 LINK mem_callbacks 00:02:24.428 LINK spdk_nvme_perf 00:02:24.428 LINK spdk_top 00:02:24.428 LINK spdk_nvme_identify 00:02:24.428 LINK vhost_fuzz 00:02:24.428 LINK memory_ut 00:02:24.688 LINK cuse 00:02:25.257 LINK iscsi_fuzz 00:02:27.164 LINK esnap 00:02:27.164 00:02:27.164 real 0m30.772s 00:02:27.164 user 4m50.944s 00:02:27.164 sys 2m44.294s 00:02:27.164 21:48:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:27.164 21:48:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:27.164 ************************************ 00:02:27.164 END TEST make 00:02:27.164 ************************************ 00:02:27.424 21:48:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:27.424 21:48:38 -- nvmf/common.sh@7 -- # uname -s 00:02:27.424 21:48:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:27.424 21:48:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:27.424 21:48:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:27.424 21:48:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:27.424 21:48:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:27.424 21:48:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:27.424 21:48:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:27.424 21:48:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:27.424 21:48:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:27.424 21:48:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:27.424 21:48:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:02:27.424 21:48:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:02:27.424 21:48:38 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:27.424 21:48:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:27.424 21:48:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:27.424 21:48:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:27.424 21:48:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:27.424 21:48:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:27.424 21:48:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:27.424 21:48:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.424 21:48:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.424 21:48:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.424 21:48:38 -- paths/export.sh@5 -- # export PATH 00:02:27.424 21:48:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:27.424 21:48:38 -- nvmf/common.sh@46 -- # : 0 00:02:27.424 21:48:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:27.424 21:48:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:27.424 21:48:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:27.424 21:48:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:27.424 21:48:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:27.424 21:48:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:27.424 21:48:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:27.424 21:48:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:27.424 21:48:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:27.424 21:48:38 -- spdk/autotest.sh@32 -- # uname -s 00:02:27.424 21:48:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:27.424 21:48:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:27.424 21:48:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:27.424 21:48:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:27.424 21:48:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:27.424 21:48:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:27.424 21:48:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:27.424 21:48:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:27.424 21:48:38 -- spdk/autotest.sh@48 -- # udevadm_pid=1938353 00:02:27.424 21:48:38 -- 
spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:27.424 21:48:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:27.424 21:48:38 -- spdk/autotest.sh@54 -- # echo 1938355 00:02:27.424 21:48:38 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:27.424 21:48:38 -- spdk/autotest.sh@56 -- # echo 1938356 00:02:27.424 21:48:38 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:02:27.424 21:48:38 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:27.424 21:48:38 -- spdk/autotest.sh@60 -- # echo 1938357 00:02:27.424 21:48:38 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:27.424 21:48:38 -- spdk/autotest.sh@62 -- # echo 1938358 00:02:27.424 21:48:38 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:02:27.424 21:48:38 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:27.424 21:48:38 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:27.424 21:48:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:27.424 21:48:38 -- common/autotest_common.sh@10 -- # set +x 00:02:27.424 21:48:38 -- spdk/autotest.sh@70 -- # create_test_list 00:02:27.424 21:48:38 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:27.424 21:48:38 -- common/autotest_common.sh@10 -- # set +x 00:02:27.424 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:27.424 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:27.424 21:48:38 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:27.424 21:48:38 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:27.424 21:48:38 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:27.424 21:48:38 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:27.424 21:48:38 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:27.424 21:48:38 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:27.424 21:48:38 -- common/autotest_common.sh@1440 -- # uname 00:02:27.424 21:48:38 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:27.424 21:48:38 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:27.424 21:48:38 -- common/autotest_common.sh@1460 -- # uname 00:02:27.424 21:48:38 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:27.424 21:48:38 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:27.424 21:48:38 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:27.424 21:48:38 -- spdk/autotest.sh@83 -- # hash lcov 00:02:27.424 21:48:38 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:27.424 21:48:38 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:27.424 --rc lcov_branch_coverage=1 00:02:27.424 --rc lcov_function_coverage=1 00:02:27.424 --rc genhtml_branch_coverage=1 00:02:27.424 
--rc genhtml_function_coverage=1 00:02:27.424 --rc genhtml_legend=1 00:02:27.424 --rc geninfo_all_blocks=1 00:02:27.424 ' 00:02:27.424 21:48:38 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:27.424 --rc lcov_branch_coverage=1 00:02:27.424 --rc lcov_function_coverage=1 00:02:27.424 --rc genhtml_branch_coverage=1 00:02:27.424 --rc genhtml_function_coverage=1 00:02:27.424 --rc genhtml_legend=1 00:02:27.424 --rc geninfo_all_blocks=1 00:02:27.424 ' 00:02:27.424 21:48:38 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:27.424 --rc lcov_branch_coverage=1 00:02:27.424 --rc lcov_function_coverage=1 00:02:27.424 --rc genhtml_branch_coverage=1 00:02:27.424 --rc genhtml_function_coverage=1 00:02:27.424 --rc genhtml_legend=1 00:02:27.424 --rc geninfo_all_blocks=1 00:02:27.424 --no-external' 00:02:27.424 21:48:38 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:27.424 --rc lcov_branch_coverage=1 00:02:27.424 --rc lcov_function_coverage=1 00:02:27.424 --rc genhtml_branch_coverage=1 00:02:27.424 --rc genhtml_function_coverage=1 00:02:27.424 --rc genhtml_legend=1 00:02:27.424 --rc geninfo_all_blocks=1 00:02:27.424 --no-external' 00:02:27.425 21:48:38 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:27.684 lcov: LCOV version 1.14 00:02:27.684 21:48:38 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:30.220 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:30.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:30.220 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:30.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:30.220 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:30.220 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:48.317 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:48.317 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:48.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions 
found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:48.318 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:48.318 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:48.318 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:48.319 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:48.319 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:50.917 21:49:01 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:50.917 21:49:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:50.917 21:49:01 -- common/autotest_common.sh@10 -- # set +x 00:02:50.917 21:49:01 -- spdk/autotest.sh@102 -- # rm -f 00:02:50.917 21:49:01 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:55.103 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:55.103 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:55.103 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:55.103 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:55.103 0000:00:04.3 (8086 2021): Already using the ioatdma 
driver 00:02:55.103 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.103 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.103 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:55.103 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:55.104 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:55.104 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:55.104 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:55.104 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:55.104 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:55.104 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:55.104 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:55.104 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:55.104 21:49:06 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:55.104 21:49:06 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:55.104 21:49:06 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:55.104 21:49:06 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:55.104 21:49:06 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:55.104 21:49:06 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:55.104 21:49:06 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:55.104 21:49:06 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:55.104 21:49:06 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:55.104 21:49:06 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:55.104 21:49:06 -- spdk/autotest.sh@121 -- # grep -v p 00:02:55.104 21:49:06 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:55.104 21:49:06 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:55.104 21:49:06 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:55.104 21:49:06 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:55.104 21:49:06 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:55.104 21:49:06 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:55.104 No valid GPT data, bailing 00:02:55.104 21:49:06 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:55.104 21:49:06 -- scripts/common.sh@393 -- # pt= 00:02:55.104 21:49:06 -- scripts/common.sh@394 -- # return 1 00:02:55.104 21:49:06 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:55.104 1+0 records in 00:02:55.104 1+0 records out 00:02:55.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00656969 s, 160 MB/s 00:02:55.104 21:49:06 -- spdk/autotest.sh@129 -- # sync 00:02:55.104 21:49:06 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:55.104 21:49:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:55.104 21:49:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:03.213 21:49:13 -- spdk/autotest.sh@135 -- # uname -s 00:03:03.214 21:49:13 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:03.214 21:49:13 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:03.214 21:49:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:03.214 21:49:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:03.214 21:49:13 -- common/autotest_common.sh@10 -- # set +x 
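For context on the pre-cleanup pass traced just above: autotest.sh enumerates the NVMe namespaces, skips any that report themselves as zoned through sysfs, and zeroes the first MiB of each namespace whose partition table cannot be read. The following is a minimal standalone sketch of that flow, not the script itself; the blkid probe mirrors the fallback visible in the trace, and the device glob and messages are illustrative.

#!/usr/bin/env bash
# Minimal sketch (not the actual autotest.sh code) of the pre-test cleanup
# traced above: skip zoned namespaces, then zero the first MiB of every
# NVMe namespace that has no readable partition table.
set -euo pipefail

for dev in /dev/nvme*n*; do
    [[ -b $dev ]] || continue
    name=$(basename "$dev")

    # Partitions (nvme0n1p1, ...) are filtered out, like the "grep -v p" above.
    [[ $name == *p* ]] && continue

    # Zoned namespaces are left alone, mirroring the is_block_zoned check.
    if [[ -e /sys/block/$name/queue/zoned ]] &&
       [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
        continue
    fi

    # An empty PTTYPE is what precedes "No valid GPT data, bailing" above.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        echo "no partition table on $dev, zeroing first MiB"
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done

In the run above the single namespace /dev/nvme0n1 takes the dd path, which is where the "1+0 records in / 1+0 records out" lines come from.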
00:03:03.214 ************************************ 00:03:03.214 START TEST setup.sh 00:03:03.214 ************************************ 00:03:03.214 21:49:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:03.214 * Looking for test storage... 00:03:03.214 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:03.214 21:49:13 -- setup/test-setup.sh@10 -- # uname -s 00:03:03.214 21:49:13 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:03.214 21:49:13 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:03.214 21:49:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:03.214 21:49:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:03.214 21:49:13 -- common/autotest_common.sh@10 -- # set +x 00:03:03.214 ************************************ 00:03:03.214 START TEST acl 00:03:03.214 ************************************ 00:03:03.214 21:49:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:03.214 * Looking for test storage... 00:03:03.214 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:03.214 21:49:13 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:03.214 21:49:13 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:03.214 21:49:13 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:03.214 21:49:13 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:03.214 21:49:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:03.214 21:49:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:03.214 21:49:13 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:03.214 21:49:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:03.214 21:49:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:03.214 21:49:13 -- setup/acl.sh@12 -- # devs=() 00:03:03.214 21:49:13 -- setup/acl.sh@12 -- # declare -a devs 00:03:03.214 21:49:13 -- setup/acl.sh@13 -- # drivers=() 00:03:03.214 21:49:13 -- setup/acl.sh@13 -- # declare -A drivers 00:03:03.214 21:49:13 -- setup/acl.sh@51 -- # setup reset 00:03:03.214 21:49:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:03.214 21:49:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.408 21:49:18 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:07.408 21:49:18 -- setup/acl.sh@16 -- # local dev driver 00:03:07.408 21:49:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.408 21:49:18 -- setup/acl.sh@15 -- # setup output status 00:03:07.408 21:49:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.408 21:49:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:11.601 Hugepages 00:03:11.601 node hugesize free / total 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # continue 00:03:11.601 21:49:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # continue 00:03:11.601 21:49:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # continue 
00:03:11.601 21:49:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 00:03:11.601 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # continue 00:03:11.601 21:49:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:11.601 21:49:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:21 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:11.601 21:49:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:21 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:21 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:11.601 21:49:21 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:21 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:21 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- 
setup/acl.sh@20 -- # continue 00:03:11.601 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.601 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.601 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.602 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.602 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:11.602 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.602 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.602 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.602 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:11.602 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.602 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.602 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.602 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:11.602 21:49:22 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.602 21:49:22 -- setup/acl.sh@20 -- # continue 00:03:11.602 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.602 21:49:22 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:11.602 21:49:22 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:11.602 21:49:22 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:11.602 21:49:22 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:11.602 21:49:22 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:11.602 21:49:22 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.602 21:49:22 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:11.602 21:49:22 -- setup/acl.sh@54 -- # run_test denied denied 00:03:11.602 21:49:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:11.602 21:49:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:11.602 21:49:22 -- common/autotest_common.sh@10 -- # set +x 00:03:11.602 ************************************ 00:03:11.602 START TEST denied 00:03:11.602 ************************************ 00:03:11.602 21:49:22 -- common/autotest_common.sh@1104 -- # denied 00:03:11.602 21:49:22 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:11.602 21:49:22 -- setup/acl.sh@38 -- # setup output config 00:03:11.602 21:49:22 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:11.602 21:49:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.602 21:49:22 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:15.792 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:15.792 21:49:26 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:15.792 21:49:26 -- setup/acl.sh@28 -- # local dev driver 00:03:15.792 21:49:26 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:15.792 21:49:26 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:15.792 21:49:26 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:15.792 21:49:26 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:15.792 21:49:26 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:15.792 21:49:26 -- setup/acl.sh@41 -- # setup reset 00:03:15.792 21:49:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.792 21:49:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.158 00:03:21.158 real 0m9.580s 00:03:21.158 user 0m2.984s 00:03:21.158 sys 0m5.933s 
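The "denied" case above blocks the controller at 0000:d8:00.0 through PCI_BLOCKED and then checks, via sysfs, that the device is still owned by the kernel nvme driver. Below is a small sketch of that driver lookup only, using the same readlink call seen in the trace; the BDF is copied from this run and everything else is illustrative.

#!/usr/bin/env bash
# Sketch of the driver verification step from the acl "denied" test above:
# resolve the device's driver symlink under sysfs and compare it to "nvme".
bdf="0000:d8:00.0"   # BDF taken from this run; substitute your own device

if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    # e.g. /sys/bus/pci/devices/0000:d8:00.0/driver -> .../drivers/nvme
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    echo "$bdf is bound to the '$driver' driver"
    [[ $driver == nvme ]] && echo "still on the kernel nvme driver, as the test expects"
else
    echo "$bdf has no driver bound (or the device does not exist)"
fi

In the "allowed" case that follows, the same lookup would resolve to vfio-pci once setup.sh config rebinds the device, as the "nvme -> vfio-pci" line in the trace shows.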
00:03:21.158 21:49:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:21.158 21:49:31 -- common/autotest_common.sh@10 -- # set +x 00:03:21.158 ************************************ 00:03:21.158 END TEST denied 00:03:21.158 ************************************ 00:03:21.158 21:49:31 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:21.158 21:49:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:21.158 21:49:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:21.158 21:49:31 -- common/autotest_common.sh@10 -- # set +x 00:03:21.158 ************************************ 00:03:21.158 START TEST allowed 00:03:21.158 ************************************ 00:03:21.158 21:49:31 -- common/autotest_common.sh@1104 -- # allowed 00:03:21.158 21:49:31 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:21.158 21:49:31 -- setup/acl.sh@45 -- # setup output config 00:03:21.158 21:49:31 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:21.158 21:49:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.158 21:49:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:26.421 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:26.421 21:49:37 -- setup/acl.sh@47 -- # verify 00:03:26.421 21:49:37 -- setup/acl.sh@28 -- # local dev driver 00:03:26.421 21:49:37 -- setup/acl.sh@48 -- # setup reset 00:03:26.421 21:49:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.421 21:49:37 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.609 00:03:30.609 real 0m9.264s 00:03:30.609 user 0m2.162s 00:03:30.609 sys 0m4.755s 00:03:30.609 21:49:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.609 21:49:41 -- common/autotest_common.sh@10 -- # set +x 00:03:30.609 ************************************ 00:03:30.609 END TEST allowed 00:03:30.609 ************************************ 00:03:30.609 00:03:30.609 real 0m27.751s 00:03:30.609 user 0m8.306s 00:03:30.609 sys 0m16.763s 00:03:30.609 21:49:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.609 21:49:41 -- common/autotest_common.sh@10 -- # set +x 00:03:30.609 ************************************ 00:03:30.609 END TEST acl 00:03:30.609 ************************************ 00:03:30.609 21:49:41 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:30.609 21:49:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.609 21:49:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.609 21:49:41 -- common/autotest_common.sh@10 -- # set +x 00:03:30.609 ************************************ 00:03:30.609 START TEST hugepages 00:03:30.609 ************************************ 00:03:30.609 21:49:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:30.609 * Looking for test storage... 
00:03:30.609 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:30.609 21:49:41 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:30.609 21:49:41 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:30.609 21:49:41 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:30.609 21:49:41 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:30.609 21:49:41 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:30.609 21:49:41 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:30.609 21:49:41 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:30.609 21:49:41 -- setup/common.sh@18 -- # local node= 00:03:30.609 21:49:41 -- setup/common.sh@19 -- # local var val 00:03:30.609 21:49:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.609 21:49:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.609 21:49:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.609 21:49:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.609 21:49:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.609 21:49:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.609 21:49:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.609 21:49:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.609 21:49:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 34263724 kB' 'MemAvailable: 39407556 kB' 'Buffers: 4096 kB' 'Cached: 17316432 kB' 'SwapCached: 0 kB' 'Active: 13134636 kB' 'Inactive: 4709516 kB' 'Active(anon): 12656280 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526884 kB' 'Mapped: 199328 kB' 'Shmem: 12132656 kB' 'KReclaimable: 603452 kB' 'Slab: 1317968 kB' 'SReclaimable: 603452 kB' 'SUnreclaim: 714516 kB' 'KernelStack: 22528 kB' 'PageTables: 8888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439048 kB' 'Committed_AS: 14150700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:30.609 21:49:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.609 21:49:41 -- setup/common.sh@32 -- # continue 00:03:30.609 21:49:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.609 21:49:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.609 21:49:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.609 21:49:41 -- setup/common.sh@32 -- # continue 00:03:30.609 21:49:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.609 21:49:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.609 21:49:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.609 21:49:41 -- setup/common.sh@32 -- # continue 00:03:30.609 21:49:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.609 21:49:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.609 21:49:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:30.609 21:49:41 -- setup/common.sh@32 -- # continue
[... xtrace trimmed: the Hugepagesize lookup walks the remaining /proc/meminfo keys (Cached through HugePages_Surp) and each one takes the '# continue' branch ...]
00:03:30.611 21:49:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.611 21:49:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.611 21:49:41 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:30.611 21:49:41 -- setup/common.sh@33 -- # echo 2048 00:03:30.611 21:49:41 -- setup/common.sh@33 -- # return 0 00:03:30.611 21:49:41 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:30.611 21:49:41 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:30.611 21:49:41 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:30.611 21:49:41 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:30.611 21:49:41 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:30.611 21:49:41 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
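The trimmed block above is the xtrace of common.sh's get_meminfo helper scanning /proc/meminfo key by key until it reaches Hugepagesize and echoes 2048. A self-contained sketch of that helper, keeping the names and field handling visible in the trace but with a simplified loop body, so an assumption-laden sketch rather than the exact common.sh code:

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node <n> " prefix strip below

# get_meminfo <field> [node]: print the value of <field> from /proc/meminfo,
# or from the per-NUMA-node meminfo when a node index is given.
get_meminfo() {
  local get=$1 node=${2:-}
  local var val line
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi

  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node 0 "

  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"   # e.g. var=Hugepagesize val=2048 _=kB
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1
}

get_meminfo Hugepagesize   # prints 2048 on the system traced above (2 MiB pages)

The repeated "[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] ... continue" lines in the log are simply this loop's comparison failing for every earlier key before the match.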
00:03:30.611 21:49:41 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:30.611 21:49:41 -- setup/hugepages.sh@207 -- # get_nodes 00:03:30.611 21:49:41 -- setup/hugepages.sh@27 -- # local node 00:03:30.611 21:49:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.611 21:49:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:30.611 21:49:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.611 21:49:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.611 21:49:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.611 21:49:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.611 21:49:41 -- setup/hugepages.sh@208 -- # clear_hp 00:03:30.611 21:49:41 -- setup/hugepages.sh@37 -- # local node hp 00:03:30.611 21:49:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.611 21:49:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.611 21:49:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:30.611 21:49:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.611 21:49:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:30.611 21:49:41 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.611 21:49:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.611 21:49:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:30.611 21:49:41 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.611 21:49:41 -- setup/hugepages.sh@41 -- # echo 0 00:03:30.611 21:49:41 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.611 21:49:41 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.611 21:49:41 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:30.611 21:49:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.611 21:49:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.611 21:49:41 -- common/autotest_common.sh@10 -- # set +x 00:03:30.611 ************************************ 00:03:30.611 START TEST default_setup 00:03:30.611 ************************************ 00:03:30.611 21:49:41 -- common/autotest_common.sh@1104 -- # default_setup 00:03:30.611 21:49:41 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:30.611 21:49:41 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.611 21:49:41 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:30.611 21:49:41 -- setup/hugepages.sh@51 -- # shift 00:03:30.611 21:49:41 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:30.611 21:49:41 -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.611 21:49:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.611 21:49:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.611 21:49:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:30.611 21:49:41 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:30.611 21:49:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.611 21:49:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:30.611 21:49:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.611 21:49:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.611 21:49:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.611 21:49:41 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:30.611 21:49:41 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.611 21:49:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:30.611 21:49:41 -- setup/hugepages.sh@73 -- # return 0 00:03:30.611 21:49:41 -- setup/hugepages.sh@137 -- # setup output 00:03:30.611 21:49:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.611 21:49:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:33.902 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:33.902 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:35.808 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.071 21:49:47 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:36.071 21:49:47 -- setup/hugepages.sh@89 -- # local node 00:03:36.071 21:49:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.071 21:49:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.071 21:49:47 -- setup/hugepages.sh@92 -- # local surp 00:03:36.071 21:49:47 -- setup/hugepages.sh@93 -- # local resv 00:03:36.071 21:49:47 -- setup/hugepages.sh@94 -- # local anon 00:03:36.071 21:49:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.071 21:49:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.071 21:49:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.071 21:49:47 -- setup/common.sh@18 -- # local node= 00:03:36.071 21:49:47 -- setup/common.sh@19 -- # local var val 00:03:36.071 21:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.071 21:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.071 21:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.071 21:49:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.071 21:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.071 21:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.071 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.071 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.071 21:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36418128 kB' 'MemAvailable: 41561856 kB' 'Buffers: 4096 kB' 'Cached: 17316568 kB' 'SwapCached: 0 kB' 'Active: 13151932 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673576 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544336 kB' 'Mapped: 199360 kB' 'Shmem: 12132792 kB' 'KReclaimable: 603348 kB' 'Slab: 1315864 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712516 kB' 'KernelStack: 22736 kB' 'PageTables: 9076 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14210944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB'
[... xtrace trimmed: the AnonHugePages lookup checks keys MemTotal through Committed_AS and each one takes the '# continue' branch ...]
00:03:36.072 21:49:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # read
-r var val _ 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.072 21:49:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.072 21:49:47 -- setup/common.sh@33 -- # echo 0 00:03:36.072 21:49:47 -- setup/common.sh@33 -- # return 0 00:03:36.072 21:49:47 -- setup/hugepages.sh@97 -- # anon=0 00:03:36.072 21:49:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.072 21:49:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.072 21:49:47 -- setup/common.sh@18 -- # local node= 00:03:36.072 21:49:47 -- setup/common.sh@19 -- # local var val 00:03:36.072 21:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.072 21:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.072 21:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.072 21:49:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.072 21:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.072 21:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.072 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.073 21:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36420708 kB' 'MemAvailable: 41564436 kB' 'Buffers: 4096 kB' 'Cached: 17316572 kB' 'SwapCached: 0 kB' 'Active: 13151988 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673632 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544932 kB' 'Mapped: 199864 kB' 'Shmem: 12132796 kB' 'KReclaimable: 603348 kB' 'Slab: 1315824 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712476 kB' 'KernelStack: 22736 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14173308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 
kB' 'DirectMap1G: 20971520 kB'
[... xtrace trimmed: the HugePages_Surp lookup checks keys MemTotal through HugePages_Free and each one takes the '# continue' branch ...]
00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.074 21:49:47 -- setup/common.sh@33 -- # echo 0 00:03:36.074 21:49:47 -- setup/common.sh@33 -- # return 0 00:03:36.074 21:49:47 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.074 21:49:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.074 21:49:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.074 21:49:47 -- setup/common.sh@18 -- # local node= 00:03:36.074 21:49:47 -- setup/common.sh@19 -- # local var val 00:03:36.074 21:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.074 21:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.074 21:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.074 21:49:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.074 21:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.074 21:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36416328 kB' 'MemAvailable: 41560056 kB' 'Buffers: 4096 kB' 'Cached: 17316584 kB' 'SwapCached: 0 kB' 'Active: 13155368 kB' 'Inactive: 4709516 kB' 'Active(anon): 12677012 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 548220 kB' 'Mapped: 199924 kB' 'Shmem: 12132808 kB' 'KReclaimable: 603348 kB' 'Slab: 1315924 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712576 kB' 'KernelStack: 22768 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14176512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220692 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.074 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.074 21:49:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.074 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 
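The printf '%s\n' record a little above dumps the whole meminfo snapshot the helper is scanning, so the hugepage state being extracted is already visible in it: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0 and Hugepagesize: 2048 kB. Outside the harness the same counters can be checked with a one-liner (illustrative only, not part of the SPDK scripts):

    # Print just the hugepage counters the test reads; values on this host
    # match the snapshot printed above.
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb):' /proc/meminfo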
00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.075 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.075 21:49:47 -- setup/common.sh@33 -- # echo 0 00:03:36.075 21:49:47 -- setup/common.sh@33 -- # return 0 00:03:36.075 21:49:47 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.075 21:49:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.075 nr_hugepages=1024 00:03:36.075 21:49:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.075 resv_hugepages=0 00:03:36.075 21:49:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.075 surplus_hugepages=0 00:03:36.075 21:49:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.075 anon_hugepages=0 00:03:36.075 21:49:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.075 21:49:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.075 21:49:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.075 21:49:47 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:03:36.075 21:49:47 -- setup/common.sh@18 -- # local node= 00:03:36.075 21:49:47 -- setup/common.sh@19 -- # local var val 00:03:36.075 21:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.075 21:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.075 21:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.075 21:49:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.075 21:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.075 21:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.075 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36412360 kB' 'MemAvailable: 41556088 kB' 'Buffers: 4096 kB' 'Cached: 17316596 kB' 'SwapCached: 0 kB' 'Active: 13157268 kB' 'Inactive: 4709516 kB' 'Active(anon): 12678912 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 549564 kB' 'Mapped: 200272 kB' 'Shmem: 12132820 kB' 'KReclaimable: 603348 kB' 'Slab: 1315924 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712576 kB' 'KernelStack: 22720 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14178372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220696 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
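The setup/common.sh@18-@29 records that open each lookup show how the helper picks its input: with node= left empty, the test [[ -e /sys/devices/system/node/node/meminfo ]] fails and mem_f stays /proc/meminfo; the chosen file is captured with mapfile -t mem and the array is rewritten with mem=("${mem[@]#Node +([0-9]) }"), which strips the "Node <N> " prefix carried by per-node meminfo files so one parser works for both sources. A standalone sketch of that strip (the extglob pattern is the one visible in the trace; the node0 path is the one used later in this log, and extglob is assumed to be enabled as the trace implies):

    # Per-node meminfo lines read "Node 0 HugePages_Total:    1024";
    # removing the "Node <N> " prefix makes them look like /proc/meminfo lines.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep '^HugePages_'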
00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.076 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.076 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # 
continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 
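The hugepages.sh@107/@109 records further up, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), are the consistency check this whole pass exists for: the HugePages_Total read back from the kernel has to equal the requested page count plus any surplus and reserved pages, and the HugePages_Total lookup in progress here feeds the same comparison again at @110 just below. A self-contained sketch of that check, with the values taken from this run (names paraphrased, not the SPDK source):

    # Verify hugepage accounting the way the trace does:
    # total == requested + surplus + reserved.
    nr_hugepages=1024
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0 in this run
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0 in this run
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run
    (( total == nr_hugepages + surp + resv )) && echo OK || echo "hugepage accounting mismatch"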
00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.077 21:49:47 -- setup/common.sh@33 -- # echo 1024 00:03:36.077 21:49:47 -- setup/common.sh@33 -- # return 0 00:03:36.077 21:49:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.077 21:49:47 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.077 21:49:47 -- setup/hugepages.sh@27 -- # local node 00:03:36.077 21:49:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.077 21:49:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:36.077 21:49:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.077 21:49:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:36.077 21:49:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.077 21:49:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.077 21:49:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.077 21:49:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.077 21:49:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.077 21:49:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.077 21:49:47 -- setup/common.sh@18 -- # local node=0 00:03:36.077 21:49:47 -- setup/common.sh@19 -- # local var val 00:03:36.077 21:49:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.077 21:49:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.077 21:49:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.077 21:49:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.077 21:49:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.077 21:49:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21326108 kB' 'MemUsed: 11265976 kB' 'SwapCached: 0 
kB' 'Active: 7186660 kB' 'Inactive: 569080 kB' 'Active(anon): 6909348 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7599560 kB' 'Mapped: 69448 kB' 'AnonPages: 159520 kB' 'Shmem: 6753168 kB' 'KernelStack: 11368 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389196 kB' 'Slab: 730720 kB' 'SReclaimable: 389196 kB' 'SUnreclaim: 341524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.077 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.077 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 
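Earlier in this block get_nodes walks /sys/devices/system/node/node+([0-9]), recording 1024 pages on node0 and 0 on node1 (no_nodes=2), and the follow-up get_meminfo HugePages_Surp 0 call switches mem_f to /sys/devices/system/node/node0/meminfo, whose snapshot is the per-node printf above; the scan over that snapshot continues below until it hits HugePages_Surp and echoes 0. The same enumeration can be reproduced standalone (illustrative sketch; the sysfs paths are the ones the trace uses):

    # Walk the NUMA nodes the way get_nodes does and report per-node hugepage totals.
    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
        echo "${node##*/}: HugePages_Total=$total"
    done

On this host that prints node0 with 1024 pages and node1 with 0, which is what the "node0=1024 expecting 1024" line a little further down asserts for node0.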
00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # continue 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.078 21:49:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.078 21:49:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.078 21:49:47 -- setup/common.sh@33 -- # echo 0 00:03:36.078 21:49:47 -- setup/common.sh@33 -- # return 0 00:03:36.078 21:49:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.078 21:49:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.078 21:49:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.078 21:49:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.078 21:49:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:36.078 node0=1024 expecting 1024 00:03:36.078 21:49:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:36.078 00:03:36.078 real 0m5.909s 00:03:36.078 user 0m1.299s 00:03:36.078 sys 0m2.651s 00:03:36.078 21:49:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:36.078 21:49:47 -- common/autotest_common.sh@10 -- # set +x 00:03:36.078 ************************************ 00:03:36.078 END TEST default_setup 00:03:36.078 ************************************ 00:03:36.078 21:49:47 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:36.078 21:49:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:36.078 21:49:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:36.078 21:49:47 -- common/autotest_common.sh@10 -- # set +x 00:03:36.078 ************************************ 00:03:36.078 START TEST per_node_1G_alloc 00:03:36.078 ************************************ 00:03:36.078 21:49:47 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:36.078 21:49:47 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:36.078 21:49:47 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:36.078 21:49:47 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:36.078 21:49:47 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:36.078 21:49:47 -- setup/hugepages.sh@51 -- # shift 00:03:36.078 21:49:47 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:36.078 21:49:47 -- setup/hugepages.sh@52 -- # local node_ids 00:03:36.078 21:49:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.078 21:49:47 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:36.078 21:49:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:36.078 21:49:47 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:36.078 21:49:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.078 21:49:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:36.078 21:49:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.078 21:49:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.078 21:49:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.078 21:49:47 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:36.078 21:49:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.078 21:49:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:36.078 21:49:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:36.078 21:49:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:36.078 21:49:47 -- setup/hugepages.sh@73 -- # return 0 00:03:36.078 21:49:47 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:36.078 
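default_setup closes with node0 holding the expected 1024 pages (about 5.9 s of wall time), and per_node_1G_alloc begins by asking get_test_nr_hugepages for 1048576 kB on nodes 0 and 1. With the 2048 kB default hugepage size reported in the snapshots above, 1 GiB per node works out to 512 pages, which is why the trace sets nodes_test[...]=512 for both nodes and exports NRHUGE=512 with HUGENODE=0,1 before re-running scripts/setup.sh on the next line. The arithmetic as a sketch (values copied from the trace):

    # Page count behind NRHUGE=512: 1 GiB per node divided by the 2048 kB hugepage size.
    size_kb=1048576
    hugepagesize_kb=2048
    echo "pages per node: $(( size_kb / hugepagesize_kb ))"   # -> 512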
21:49:47 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:36.078 21:49:47 -- setup/hugepages.sh@146 -- # setup output 00:03:36.078 21:49:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.078 21:49:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:40.280 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.280 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:40.280 21:49:51 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:40.280 21:49:51 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:40.280 21:49:51 -- setup/hugepages.sh@89 -- # local node 00:03:40.280 21:49:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.280 21:49:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.280 21:49:51 -- setup/hugepages.sh@92 -- # local surp 00:03:40.280 21:49:51 -- setup/hugepages.sh@93 -- # local resv 00:03:40.280 21:49:51 -- setup/hugepages.sh@94 -- # local anon 00:03:40.280 21:49:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.280 21:49:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.280 21:49:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.280 21:49:51 -- setup/common.sh@18 -- # local node= 00:03:40.280 21:49:51 -- setup/common.sh@19 -- # local var val 00:03:40.280 21:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.280 21:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.280 21:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.280 21:49:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.280 21:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.280 21:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36407764 kB' 'MemAvailable: 41551492 kB' 'Buffers: 4096 kB' 'Cached: 17316704 kB' 'SwapCached: 0 kB' 'Active: 13149768 kB' 'Inactive: 4709516 kB' 'Active(anon): 12671412 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542068 kB' 'Mapped: 198308 kB' 
'Shmem: 12132928 kB' 'KReclaimable: 603348 kB' 'Slab: 1315616 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712268 kB' 'KernelStack: 22496 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14159000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 
-- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.280 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.280 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- 
setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.281 21:49:51 -- setup/common.sh@33 -- # echo 0 00:03:40.281 21:49:51 -- setup/common.sh@33 -- # return 0 00:03:40.281 21:49:51 -- setup/hugepages.sh@97 -- # anon=0 00:03:40.281 21:49:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.281 21:49:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.281 21:49:51 -- setup/common.sh@18 -- # local node= 00:03:40.281 21:49:51 -- setup/common.sh@19 -- # local var val 00:03:40.281 21:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.281 21:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.281 21:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.281 21:49:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.281 21:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.281 21:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36407764 kB' 'MemAvailable: 41551492 kB' 'Buffers: 4096 kB' 'Cached: 17316704 kB' 'SwapCached: 0 kB' 'Active: 13150792 kB' 'Inactive: 4709516 kB' 'Active(anon): 12672436 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542588 kB' 'Mapped: 198384 kB' 'Shmem: 12132928 kB' 'KReclaimable: 603348 kB' 'Slab: 1315640 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712292 kB' 'KernelStack: 22544 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14159012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.281 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.281 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': 
' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.282 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.282 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.283 21:49:51 -- setup/common.sh@33 -- # echo 0 00:03:40.283 21:49:51 -- setup/common.sh@33 -- # return 0 00:03:40.283 21:49:51 -- setup/hugepages.sh@99 -- # surp=0 00:03:40.283 21:49:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.283 21:49:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.283 21:49:51 -- setup/common.sh@18 -- # local node= 00:03:40.283 21:49:51 -- setup/common.sh@19 -- # local var val 00:03:40.283 21:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.283 21:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.283 21:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.283 21:49:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.283 21:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.283 21:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36407764 kB' 'MemAvailable: 41551492 kB' 'Buffers: 4096 kB' 'Cached: 17316716 kB' 'SwapCached: 0 kB' 'Active: 13150260 kB' 'Inactive: 4709516 kB' 'Active(anon): 12671904 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542064 kB' 'Mapped: 198356 kB' 'Shmem: 12132940 kB' 'KReclaimable: 603348 kB' 'Slab: 1315640 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712292 kB' 'KernelStack: 22528 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14159024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220676 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 
-- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.283 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.283 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.284 21:49:51 -- setup/common.sh@33 -- # echo 0 00:03:40.284 21:49:51 -- setup/common.sh@33 -- # return 0 00:03:40.284 21:49:51 -- setup/hugepages.sh@100 -- # resv=0 00:03:40.284 21:49:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.284 nr_hugepages=1024 00:03:40.284 21:49:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.284 resv_hugepages=0 00:03:40.284 21:49:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.284 surplus_hugepages=0 00:03:40.284 21:49:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.284 anon_hugepages=0 00:03:40.284 21:49:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.284 21:49:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.284 21:49:51 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:03:40.284 21:49:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.284 21:49:51 -- setup/common.sh@18 -- # local node= 00:03:40.284 21:49:51 -- setup/common.sh@19 -- # local var val 00:03:40.284 21:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.284 21:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.284 21:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.284 21:49:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.284 21:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.284 21:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36407764 kB' 'MemAvailable: 41551492 kB' 'Buffers: 4096 kB' 'Cached: 17316716 kB' 'SwapCached: 0 kB' 'Active: 13150104 kB' 'Inactive: 4709516 kB' 'Active(anon): 12671748 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542428 kB' 'Mapped: 198280 kB' 'Shmem: 12132940 kB' 'KReclaimable: 603348 kB' 'Slab: 1315668 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712320 kB' 'KernelStack: 22528 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14159040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220692 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.284 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.284 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.284 21:49:51 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.285 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.285 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.285 21:49:51 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.286 21:49:51 -- setup/common.sh@33 -- # echo 1024 00:03:40.286 21:49:51 -- setup/common.sh@33 -- # return 0 00:03:40.286 21:49:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.286 21:49:51 -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.286 21:49:51 -- setup/hugepages.sh@27 -- # local node 00:03:40.286 21:49:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.286 21:49:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.286 21:49:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.286 21:49:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:40.286 21:49:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.286 21:49:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.286 21:49:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.286 21:49:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.286 21:49:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.286 21:49:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.286 21:49:51 -- setup/common.sh@18 -- # local node=0 00:03:40.286 21:49:51 -- setup/common.sh@19 -- # local var val 00:03:40.286 21:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.286 21:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.286 21:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.286 21:49:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.286 21:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.286 21:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32592084 kB' 'MemFree: 22371144 kB' 'MemUsed: 10220940 kB' 'SwapCached: 0 kB' 'Active: 7183344 kB' 'Inactive: 569080 kB' 'Active(anon): 6906032 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7599640 kB' 'Mapped: 68932 kB' 'AnonPages: 156208 kB' 'Shmem: 6753248 kB' 'KernelStack: 11368 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389196 kB' 'Slab: 730308 kB' 'SReclaimable: 389196 kB' 'SUnreclaim: 341112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 
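The printf output just above comes from the per-node branch of the same helper: when a node number is passed, mem_f is switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo, and every line there carries a "Node 0 " prefix that the script strips with the extglob expansion "${mem[@]#Node +([0-9]) }" before the key/value scan runs. A rough sketch of that path, assuming extglob is available (again an illustration, not the script itself):

shopt -s extglob
get_node_meminfo_sketch() {
    # Same key lookup as before, but against one NUMA node's meminfo.
    local get=$1 node=$2 var val _
    local mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # drop the leading "Node <N> " prefix
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

On this box, get_node_meminfo_sketch HugePages_Total 0 would print 512, i.e. half of the 1024-page pool pinned to node 0.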
00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- 
setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.286 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.286 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 
00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@33 -- # echo 0 00:03:40.287 21:49:51 -- setup/common.sh@33 -- # return 0 00:03:40.287 21:49:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.287 21:49:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.287 21:49:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.287 21:49:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:40.287 21:49:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.287 21:49:51 -- setup/common.sh@18 -- # local node=1 00:03:40.287 21:49:51 -- setup/common.sh@19 -- # local var val 00:03:40.287 21:49:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.287 21:49:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.287 21:49:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.287 21:49:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.287 21:49:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.287 21:49:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14036984 kB' 'MemUsed: 13666124 kB' 'SwapCached: 0 kB' 'Active: 5966508 kB' 'Inactive: 4140436 kB' 'Active(anon): 5765464 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9721200 kB' 'Mapped: 129348 kB' 'AnonPages: 385900 kB' 'Shmem: 5379720 kB' 'KernelStack: 11192 kB' 'PageTables: 4264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214152 kB' 'Slab: 585360 kB' 'SReclaimable: 214152 kB' 'SUnreclaim: 371208 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 
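For readability: the repeated IFS=': ' / read -r / continue records above and below are setup/common.sh's get_meminfo scanning one meminfo key per call, here against the per-node files under /sys/devices/system/node. A minimal reconstruction of that helper, inferred from the xtrace only (the real setup/common.sh may differ in detail):

# Reconstruction of get_meminfo as it appears in the trace; illustrative only.
shopt -s extglob                        # needed for the +([0-9]) prefix strip below
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem line var val _
    # With a node argument, prefer the per-node meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines are prefixed with "Node <N> "
    for line in "${mem[@]}"; do
        # lines look like "HugePages_Surp: 0" or "MemFree: 22371144 kB"
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                 # matches the "echo 0 / return 0" records above
            return 0
        fi
    done
    return 1
}
# e.g. surp0=$(get_meminfo HugePages_Surp 0)   # -> 0 in the run traced here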
00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- 
setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.287 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.287 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # continue 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.288 21:49:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.288 21:49:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.288 21:49:51 -- setup/common.sh@33 -- # echo 0 00:03:40.288 21:49:51 -- setup/common.sh@33 -- # return 0 00:03:40.288 21:49:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.288 21:49:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.288 21:49:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.288 21:49:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.288 21:49:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:40.288 node0=512 expecting 512 00:03:40.288 21:49:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.288 21:49:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.288 21:49:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.288 21:49:51 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:40.288 node1=512 expecting 512 00:03:40.288 21:49:51 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:40.288 00:03:40.288 real 0m3.931s 00:03:40.288 user 0m1.329s 00:03:40.288 sys 0m2.621s 00:03:40.288 21:49:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.288 21:49:51 -- common/autotest_common.sh@10 -- # set +x 00:03:40.288 ************************************ 00:03:40.288 END TEST per_node_1G_alloc 00:03:40.288 ************************************ 00:03:40.288 21:49:51 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:40.288 
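The node0=512/node1=512 lines close out per_node_1G_alloc: each node's surplus (and any reserved) huge pages are folded into the expected per-node count before the final [[ 512 == 512 ]] comparison. A simplified sketch of that check, with illustrative variable names rather than the exact hugepages.sh bookkeeping:

# Sketch only, approximating the per-node check traced above.
nodes_test=(512 512)                          # pages this test expects on node 0 and node 1
resv=0                                        # reserved-page count gathered earlier; 0 in this run
for node in "${!nodes_test[@]}"; do
    surp=$(get_meminfo HugePages_Surp "$node")   # 0 on both nodes above
    (( nodes_test[node] += resv + surp ))        # so the expected 512 stays 512
done
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_test[node]} expecting 512"
    [[ ${nodes_test[node]} == 512 ]]             # the [[ 512 == \5\1\2 ]] seen in the trace
done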
21:49:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.288 21:49:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.288 21:49:51 -- common/autotest_common.sh@10 -- # set +x 00:03:40.288 ************************************ 00:03:40.288 START TEST even_2G_alloc 00:03:40.288 ************************************ 00:03:40.288 21:49:51 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:40.288 21:49:51 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:40.288 21:49:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:40.288 21:49:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:40.288 21:49:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:40.288 21:49:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:40.288 21:49:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:40.288 21:49:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:40.288 21:49:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:40.288 21:49:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:40.288 21:49:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:40.288 21:49:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:40.288 21:49:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:40.288 21:49:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:40.288 21:49:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:40.288 21:49:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.288 21:49:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.288 21:49:51 -- setup/hugepages.sh@83 -- # : 512 00:03:40.288 21:49:51 -- setup/hugepages.sh@84 -- # : 1 00:03:40.288 21:49:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.288 21:49:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:40.288 21:49:51 -- setup/hugepages.sh@83 -- # : 0 00:03:40.288 21:49:51 -- setup/hugepages.sh@84 -- # : 0 00:03:40.288 21:49:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:40.288 21:49:51 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:40.288 21:49:51 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:40.288 21:49:51 -- setup/hugepages.sh@153 -- # setup output 00:03:40.288 21:49:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.288 21:49:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:44.483 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.483 0000:80:04.0 (8086 2021): 
Already using the vfio-pci driver 00:03:44.483 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.483 21:49:55 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:44.483 21:49:55 -- setup/hugepages.sh@89 -- # local node 00:03:44.483 21:49:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.484 21:49:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.484 21:49:55 -- setup/hugepages.sh@92 -- # local surp 00:03:44.484 21:49:55 -- setup/hugepages.sh@93 -- # local resv 00:03:44.484 21:49:55 -- setup/hugepages.sh@94 -- # local anon 00:03:44.484 21:49:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.484 21:49:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.484 21:49:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.484 21:49:55 -- setup/common.sh@18 -- # local node= 00:03:44.484 21:49:55 -- setup/common.sh@19 -- # local var val 00:03:44.484 21:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.484 21:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.484 21:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.484 21:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.484 21:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.484 21:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36411020 kB' 'MemAvailable: 41554748 kB' 'Buffers: 4096 kB' 'Cached: 17316848 kB' 'SwapCached: 0 kB' 'Active: 13151356 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673000 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543340 kB' 'Mapped: 198280 kB' 'Shmem: 12133072 kB' 'KReclaimable: 603348 kB' 'Slab: 1315700 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712352 kB' 'KernelStack: 22736 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220884 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 
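even_2G_alloc begins by converting the requested 2097152 kB into 1024 huge pages (2048 kB each, per the Hugepagesize field in the dump above) and splitting them evenly across the two NUMA nodes before re-running setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A hedged sketch of that split, simplified rather than the actual get_test_nr_hugepages_per_node:

# Even per-node split of the requested huge page pool (illustrative sketch).
size_kb=2097152                                   # requested pool size, 2 GB
hugepage_kb=2048                                  # Hugepagesize reported above
nr_hugepages=$(( size_kb / hugepage_kb ))         # 1024, as set via NRHUGE in the trace
no_nodes=2
nodes_test=()
for (( node = no_nodes - 1; node >= 0; node-- )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 pages on each node
done
printf 'node%s=%s\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"   # node0=512, node1=512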
00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 
21:49:55 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.484 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.484 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.485 21:49:55 -- setup/common.sh@33 -- # echo 0 00:03:44.485 21:49:55 -- setup/common.sh@33 -- # 
return 0 00:03:44.485 21:49:55 -- setup/hugepages.sh@97 -- # anon=0 00:03:44.485 21:49:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.485 21:49:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.485 21:49:55 -- setup/common.sh@18 -- # local node= 00:03:44.485 21:49:55 -- setup/common.sh@19 -- # local var val 00:03:44.485 21:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.485 21:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.485 21:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.485 21:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.485 21:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.485 21:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36411248 kB' 'MemAvailable: 41554976 kB' 'Buffers: 4096 kB' 'Cached: 17316848 kB' 'SwapCached: 0 kB' 'Active: 13151500 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673144 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543528 kB' 'Mapped: 198280 kB' 'Shmem: 12133072 kB' 'KReclaimable: 603348 kB' 'Slab: 1315680 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712332 kB' 'KernelStack: 22640 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220916 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.485 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.485 21:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 
-- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.486 21:49:55 -- setup/common.sh@33 -- # echo 0 00:03:44.486 21:49:55 -- setup/common.sh@33 -- # return 0 00:03:44.486 21:49:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:44.486 21:49:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.486 21:49:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.486 21:49:55 -- setup/common.sh@18 -- # local node= 00:03:44.486 21:49:55 -- setup/common.sh@19 -- # local var val 00:03:44.486 21:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.486 21:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.486 21:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.486 21:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.486 21:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.486 21:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36410500 kB' 'MemAvailable: 41554228 kB' 'Buffers: 4096 kB' 'Cached: 17316860 kB' 'SwapCached: 0 kB' 'Active: 13151556 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673200 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543460 kB' 'Mapped: 198280 kB' 'Shmem: 12133084 kB' 'KReclaimable: 603348 kB' 'Slab: 1315768 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712420 kB' 'KernelStack: 22640 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14159752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220884 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.486 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.486 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 
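The consecutive get_meminfo calls traced here are verify_nr_hugepages gathering the system-wide AnonHugePages (anon=0), HugePages_Surp (surp=0) and HugePages_Rsvd counters before it repeats the per-node pass. Condensed, and with the final check only approximated, the sequence is:

# Condensed view of the verification pass; simplified relative to setup/hugepages.sh.
anon=$(get_meminfo AnonHugePages)     # 0 kB here: THP is madvise-only and unused
surp=$(get_meminfo HugePages_Surp)    # 0: nothing allocated beyond the configured pool
resv=$(get_meminfo HugePages_Rsvd)    # pages reserved by mappings but not yet faulted in
total=$(get_meminfo HugePages_Total)  # 1024 for this test
free=$(get_meminfo HugePages_Free)    # 1024: the pool is still untouched
# Roughly, the test then expects the global pool to match what it just configured
(( total == 1024 && surp == 0 )) || echo "unexpected huge page pool state"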
00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.487 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.487 21:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 
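The backslash-heavy right-hand side in these comparisons is just how bash xtrace renders a quoted, literal match target; the script itself does not contain those escapes. A small illustration under the default PS4 prompt (not taken from this log):

  $ set -x
  $ key=HugePages_Rsvd
  $ [[ MemFree == "$key" ]]
  + [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]

So every 'Key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d' line above is one loop pass over a meminfo field that did not match.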
00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.488 21:49:55 -- setup/common.sh@33 -- # echo 0 00:03:44.488 21:49:55 -- setup/common.sh@33 -- # return 0 00:03:44.488 21:49:55 -- setup/hugepages.sh@100 -- # resv=0 00:03:44.488 21:49:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.488 nr_hugepages=1024 00:03:44.488 21:49:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.488 resv_hugepages=0 00:03:44.488 21:49:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.488 surplus_hugepages=0 00:03:44.488 21:49:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.488 anon_hugepages=0 00:03:44.488 21:49:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.488 21:49:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.488 21:49:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.488 21:49:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.488 21:49:55 -- setup/common.sh@18 -- # local node= 00:03:44.488 21:49:55 -- setup/common.sh@19 -- # local var val 00:03:44.488 21:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.488 21:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.488 21:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.488 21:49:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.488 21:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.488 21:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36410752 kB' 'MemAvailable: 41554480 kB' 'Buffers: 4096 kB' 'Cached: 17316876 kB' 'SwapCached: 0 kB' 'Active: 13151504 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673148 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543420 kB' 'Mapped: 198280 kB' 'Shmem: 12133100 kB' 'KReclaimable: 603348 kB' 'Slab: 1315768 kB' 'SReclaimable: 603348 kB' 
'SUnreclaim: 712420 kB' 'KernelStack: 22512 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14159964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220804 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.488 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.488 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.489 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.489 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.489 21:49:55 -- setup/common.sh@33 -- # echo 1024 00:03:44.489 21:49:55 -- setup/common.sh@33 -- # return 0 00:03:44.489 21:49:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.489 21:49:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.489 21:49:55 -- setup/hugepages.sh@27 -- # local node 00:03:44.489 21:49:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.489 21:49:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.490 21:49:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.490 21:49:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.490 21:49:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.490 21:49:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.490 21:49:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.490 21:49:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.490 21:49:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.490 21:49:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.490 21:49:55 -- setup/common.sh@18 -- # local node=0 00:03:44.490 21:49:55 -- setup/common.sh@19 -- # local var val 00:03:44.490 21:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.490 21:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.490 21:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.490 21:49:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.490 21:49:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.490 21:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22377320 kB' 'MemUsed: 10214764 kB' 'SwapCached: 0 kB' 'Active: 7183136 kB' 'Inactive: 569080 kB' 'Active(anon): 6905824 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7599708 kB' 'Mapped: 68932 kB' 'AnonPages: 155744 kB' 'Shmem: 6753316 kB' 'KernelStack: 11336 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389196 kB' 'Slab: 730448 kB' 'SReclaimable: 389196 kB' 'SUnreclaim: 341252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ MemUsed 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # 
continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.490 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.490 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@33 -- # echo 0 00:03:44.491 21:49:55 -- setup/common.sh@33 -- # return 0 00:03:44.491 21:49:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.491 21:49:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.491 21:49:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.491 21:49:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:44.491 21:49:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.491 21:49:55 -- setup/common.sh@18 -- # local node=1 00:03:44.491 21:49:55 -- setup/common.sh@19 -- # local var val 00:03:44.491 21:49:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.491 21:49:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.491 21:49:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:44.491 21:49:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:44.491 21:49:55 -- setup/common.sh@28 -- # mapfile -t mem 
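At this point the same key lookup has been run against /sys/devices/system/node/node0/meminfo to get node 0's HugePages_Surp (0), and the next block repeats it for node 1. Per-node meminfo lines carry a leading "Node <id> " prefix, which the trace strips with the "${mem[@]#Node +([0-9]) }" expansion before the usual matching loop runs. A minimal standalone sketch of the per-node variant, with an assumed helper name:

  node_meminfo() {
      # print the value of one field from a per-node sysfs meminfo file
      local node=$1 want=$2 tag id key value rest
      while read -r tag id key value rest; do
          [[ $key == "${want}:" ]] && { echo "$value"; return 0; }
      done < "/sys/devices/system/node/node${node}/meminfo"
      return 1
  }
  # e.g. node_meminfo 0 HugePages_Surp   # 0 in the trace above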
00:03:44.491 21:49:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14032676 kB' 'MemUsed: 13670432 kB' 'SwapCached: 0 kB' 'Active: 5967416 kB' 'Inactive: 4140436 kB' 'Active(anon): 5766372 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9721264 kB' 'Mapped: 129348 kB' 'AnonPages: 386756 kB' 'Shmem: 5379784 kB' 'KernelStack: 11208 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214152 kB' 'Slab: 585432 kB' 'SReclaimable: 214152 kB' 'SUnreclaim: 371280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 
21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- 
setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.491 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.491 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 
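The lookups feed the bookkeeping the test actually cares about: the 1024 requested 2048 kB pages must equal nr_hugepages plus surplus plus reserved globally, and each of the two NUMA nodes is expected to hold 512 of them. Restated with the numbers from this trace:

  nr_hugepages=1024 surp=0 resv=0
  (( 1024 == nr_hugepages + surp + resv ))   # global count checks out
  node0=512 node1=512
  (( node0 + node1 == nr_hugepages ))        # even split across both nodes

The odd_alloc test that starts just below uses HUGEMEM=2049, i.e. 1025 pages, and deliberately plans an uneven 513/512 split so the per-node totals differ by one.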
00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # continue 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.492 21:49:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.492 21:49:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.492 21:49:55 -- setup/common.sh@33 -- # echo 0 00:03:44.492 21:49:55 -- setup/common.sh@33 -- # return 0 00:03:44.492 21:49:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.492 21:49:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.492 21:49:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.492 21:49:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.492 21:49:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:44.492 node0=512 expecting 512 00:03:44.492 21:49:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.492 21:49:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.492 21:49:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.492 21:49:55 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:44.492 node1=512 expecting 512 00:03:44.492 21:49:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:44.492 00:03:44.492 real 0m4.258s 00:03:44.492 user 0m1.618s 00:03:44.492 sys 0m2.729s 00:03:44.492 21:49:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.492 21:49:55 -- common/autotest_common.sh@10 -- # set +x 00:03:44.492 ************************************ 00:03:44.492 END TEST even_2G_alloc 00:03:44.492 ************************************ 00:03:44.492 21:49:55 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:44.492 21:49:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.492 21:49:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.492 21:49:55 -- common/autotest_common.sh@10 -- # set +x 00:03:44.492 ************************************ 00:03:44.492 START TEST odd_alloc 00:03:44.492 ************************************ 00:03:44.492 21:49:55 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:44.492 21:49:55 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:44.492 21:49:55 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:44.492 21:49:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.492 21:49:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.492 21:49:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:44.492 21:49:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.492 21:49:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.492 21:49:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.492 21:49:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:44.492 21:49:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.492 21:49:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.492 21:49:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.492 21:49:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.492 21:49:55 -- setup/hugepages.sh@74 -- # (( 0 > 
0 )) 00:03:44.492 21:49:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.492 21:49:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:44.492 21:49:55 -- setup/hugepages.sh@83 -- # : 513 00:03:44.492 21:49:55 -- setup/hugepages.sh@84 -- # : 1 00:03:44.492 21:49:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.492 21:49:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:44.492 21:49:55 -- setup/hugepages.sh@83 -- # : 0 00:03:44.492 21:49:55 -- setup/hugepages.sh@84 -- # : 0 00:03:44.492 21:49:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.492 21:49:55 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:44.492 21:49:55 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:44.492 21:49:55 -- setup/hugepages.sh@160 -- # setup output 00:03:44.492 21:49:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.492 21:49:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:48.759 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:48.759 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.759 21:49:59 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:48.759 21:49:59 -- setup/hugepages.sh@89 -- # local node 00:03:48.759 21:49:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.759 21:49:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.759 21:49:59 -- setup/hugepages.sh@92 -- # local surp 00:03:48.759 21:49:59 -- setup/hugepages.sh@93 -- # local resv 00:03:48.760 21:49:59 -- setup/hugepages.sh@94 -- # local anon 00:03:48.760 21:49:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.760 21:49:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.760 21:49:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.760 21:49:59 -- setup/common.sh@18 -- # local node= 00:03:48.760 21:49:59 -- setup/common.sh@19 -- # local var val 00:03:48.760 21:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.760 21:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.760 21:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.760 21:49:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.760 21:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.760 21:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.760 21:49:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36443596 kB' 'MemAvailable: 41587324 kB' 'Buffers: 4096 kB' 'Cached: 17316976 kB' 'SwapCached: 0 kB' 'Active: 13151504 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673148 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 542860 kB' 'Mapped: 198384 kB' 'Shmem: 12133200 kB' 'KReclaimable: 603348 kB' 'Slab: 1314940 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 711592 kB' 'KernelStack: 22560 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14160576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220820 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 
21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.760 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.760 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.761 21:49:59 -- setup/common.sh@33 -- # echo 0 00:03:48.761 21:49:59 -- setup/common.sh@33 -- # return 0 00:03:48.761 21:49:59 -- setup/hugepages.sh@97 -- # anon=0 00:03:48.761 21:49:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.761 21:49:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.761 21:49:59 -- setup/common.sh@18 -- # local node= 00:03:48.761 21:49:59 -- setup/common.sh@19 -- # local var val 00:03:48.761 21:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.761 21:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.761 21:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.761 21:49:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.761 21:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.761 21:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36443344 kB' 'MemAvailable: 41587072 kB' 'Buffers: 4096 kB' 'Cached: 17316976 kB' 'SwapCached: 0 kB' 'Active: 13151680 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673324 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543500 kB' 
'Mapped: 198308 kB' 'Shmem: 12133200 kB' 'KReclaimable: 603348 kB' 'Slab: 1314948 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 711600 kB' 'KernelStack: 22560 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14160588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220804 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.761 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.761 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- 
setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 
21:49:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 
21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.762 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.762 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.762 21:49:59 -- setup/common.sh@33 -- # echo 0 00:03:48.762 21:49:59 -- setup/common.sh@33 -- # return 0 00:03:48.763 21:49:59 -- setup/hugepages.sh@99 -- # surp=0 00:03:48.763 21:49:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.763 21:49:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.763 21:49:59 -- setup/common.sh@18 -- # local node= 00:03:48.763 21:49:59 -- setup/common.sh@19 -- # local var val 00:03:48.763 21:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.763 21:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.763 21:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.763 21:49:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.763 21:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.763 21:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36442924 kB' 'MemAvailable: 41586652 kB' 'Buffers: 4096 kB' 'Cached: 17316980 kB' 'SwapCached: 0 kB' 'Active: 13151208 kB' 'Inactive: 4709516 kB' 'Active(anon): 12672852 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543000 kB' 'Mapped: 198284 kB' 'Shmem: 12133204 kB' 'KReclaimable: 603348 kB' 'Slab: 1314940 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 711592 kB' 'KernelStack: 22528 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14160604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220804 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- 
setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.763 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.763 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- 
setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.764 21:49:59 -- setup/common.sh@33 -- # echo 0 00:03:48.764 21:49:59 -- setup/common.sh@33 -- # return 0 00:03:48.764 21:49:59 -- setup/hugepages.sh@100 -- # resv=0 
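[editor's note] The trace above shows the same pattern three times: get_meminfo scans /proc/meminfo field by field until it reaches the requested key (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd) and echoes its value, with each lookup resolving to 0 here. The following is an illustrative sketch only, with assumed names, not the SPDK helper itself; the real logic lives in the setup/common.sh being traced.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup idea traced above (simplified, assumed form).
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node queries read that NUMA node's own meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }        # node files prefix every line with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        echo 0                                  # key absent -> treat as zero
    }

    # The three values the verification above resolves to:
    anon=$(get_meminfo_sketch AnonHugePages)    # 0
    surp=$(get_meminfo_sketch HugePages_Surp)   # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0

With anon, surp, and resv all 0, the next step in the log checks that the system-wide HugePages_Total (1025) equals nr_hugepages + surp + resv before moving on to the per-node counts.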
00:03:48.764 21:49:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:48.764 nr_hugepages=1025 00:03:48.764 21:49:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.764 resv_hugepages=0 00:03:48.764 21:49:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.764 surplus_hugepages=0 00:03:48.764 21:49:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.764 anon_hugepages=0 00:03:48.764 21:49:59 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.764 21:49:59 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:48.764 21:49:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.764 21:49:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.764 21:49:59 -- setup/common.sh@18 -- # local node= 00:03:48.764 21:49:59 -- setup/common.sh@19 -- # local var val 00:03:48.764 21:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.764 21:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.764 21:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.764 21:49:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.764 21:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.764 21:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36442924 kB' 'MemAvailable: 41586652 kB' 'Buffers: 4096 kB' 'Cached: 17316980 kB' 'SwapCached: 0 kB' 'Active: 13151208 kB' 'Inactive: 4709516 kB' 'Active(anon): 12672852 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543000 kB' 'Mapped: 198284 kB' 'Shmem: 12133204 kB' 'KReclaimable: 603348 kB' 'Slab: 1314940 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 711592 kB' 'KernelStack: 22528 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486600 kB' 'Committed_AS: 14160616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220804 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.764 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.764 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.765 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.765 21:49:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.766 21:49:59 -- setup/common.sh@33 -- # echo 1025 00:03:48.766 21:49:59 -- setup/common.sh@33 -- # return 0 00:03:48.766 21:49:59 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:48.766 21:49:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.766 21:49:59 -- setup/hugepages.sh@27 -- # local node 00:03:48.766 21:49:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.766 21:49:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.766 21:49:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.766 21:49:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:48.766 21:49:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.766 21:49:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:48.766 21:49:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.766 21:49:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.766 21:49:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.766 21:49:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.766 21:49:59 -- setup/common.sh@18 -- # local node=0 00:03:48.766 21:49:59 -- setup/common.sh@19 -- # 
local var val 00:03:48.766 21:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.766 21:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.766 21:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.766 21:49:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.766 21:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.766 21:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22416040 kB' 'MemUsed: 10176044 kB' 'SwapCached: 0 kB' 'Active: 7184840 kB' 'Inactive: 569080 kB' 'Active(anon): 6907528 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7599844 kB' 'Mapped: 68932 kB' 'AnonPages: 157432 kB' 'Shmem: 6753452 kB' 'KernelStack: 11384 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389196 kB' 'Slab: 729668 kB' 'SReclaimable: 389196 kB' 'SUnreclaim: 340472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 
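
The call being traced here is get_meminfo HugePages_Surp 0: when a node argument is given, common.sh switches the input from /proc/meminfo to /sys/devices/system/node/node0/meminfo, loads it with mapfile, and strips the leading "Node 0 " from every line before running the same key scan as above. A condensed, self-contained sketch reconstructed from the trace (names follow the trace; this is not the verbatim SPDK helper):

  get_meminfo() {                      # e.g. get_meminfo HugePages_Surp 0
      local get=$1 node=${2:-}
      local var val _ line
      local mem_f=/proc/meminfo
      local -a mem
      # With a node argument, read the per-node meminfo file instead.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      if [[ -n $node ]]; then
          mem=("${mem[@]#Node $node }")   # per-node lines begin with "Node 0 ..."
      fi
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"                     # e.g. 0 for HugePages_Surp on node0
          return 0
      done
      return 1
  }
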
21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.766 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.766 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@33 -- # echo 0 00:03:48.767 21:49:59 -- setup/common.sh@33 -- # return 0 00:03:48.767 21:49:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.767 21:49:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.767 21:49:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.767 21:49:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.767 21:49:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.767 21:49:59 -- setup/common.sh@18 -- # local node=1 00:03:48.767 21:49:59 -- setup/common.sh@19 -- # local var val 00:03:48.767 21:49:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:48.767 21:49:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.767 21:49:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.767 21:49:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.767 21:49:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.767 21:49:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 14028656 kB' 'MemUsed: 13674452 kB' 'SwapCached: 0 kB' 'Active: 5966948 kB' 'Inactive: 4140436 kB' 'Active(anon): 5765904 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9721272 kB' 'Mapped: 129352 kB' 'AnonPages: 386208 kB' 'Shmem: 5379792 kB' 'KernelStack: 11176 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214152 kB' 'Slab: 585272 kB' 'SReclaimable: 214152 kB' 'SUnreclaim: 371120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 
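
The loop driving the node0 and node1 reads traced here (hugepages.sh@115-117) folds the reserved count plus each node's surplus pages into the expected per-node totals before they are compared. Roughly, using the get_meminfo sketch above; nodes_test[] and resv are assumed from earlier steps of the test (in this run resv and both surpluses are 0, leaving 512 and 513):

  resv=${resv:-0}
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))                                   # whole-system reserved pages
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # this node's surplus pages
  done
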
21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.767 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.767 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 
21:49:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # continue 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:48.768 21:49:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:48.768 21:49:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.768 21:49:59 -- setup/common.sh@33 -- # echo 0 00:03:48.768 21:49:59 -- setup/common.sh@33 -- # return 0 00:03:48.768 21:49:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.768 21:49:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.768 21:49:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.768 21:49:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.768 21:49:59 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:48.768 node0=512 expecting 513 00:03:48.768 21:49:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.768 21:49:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.768 21:49:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 
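
The final check of the odd_alloc test, traced at hugepages.sh@126-130 just below, compares what the test handed out per node with what sysfs reports per node as order-insensitive sets, because the 512/513 split may land on either NUMA node. It does this by using the counts themselves as indexed-array subscripts, whose keys expand in ascending order. A sketch with example values matching this run (not the verbatim script):

  nodes_test=([0]=512 [1]=513)   # what the test allocated per node
  nodes_sys=([0]=513 [1]=512)    # what sysfs reported per node
  sorted_t=()
  sorted_s=()
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1     # numeric count used as the array index
      sorted_s[nodes_sys[node]]=1
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  # Indexed-array subscripts expand in ascending order, so both sides read "512 513".
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node split OK"
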
00:03:48.768 21:49:59 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:48.768 node1=513 expecting 512 00:03:48.768 21:49:59 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:48.768 00:03:48.768 real 0m4.010s 00:03:48.768 user 0m1.374s 00:03:48.768 sys 0m2.606s 00:03:48.768 21:49:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.768 21:49:59 -- common/autotest_common.sh@10 -- # set +x 00:03:48.768 ************************************ 00:03:48.768 END TEST odd_alloc 00:03:48.768 ************************************ 00:03:48.768 21:49:59 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:48.768 21:49:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:48.768 21:49:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:48.768 21:49:59 -- common/autotest_common.sh@10 -- # set +x 00:03:48.768 ************************************ 00:03:48.768 START TEST custom_alloc 00:03:48.768 ************************************ 00:03:48.768 21:49:59 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:48.768 21:49:59 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:48.768 21:49:59 -- setup/hugepages.sh@169 -- # local node 00:03:48.768 21:49:59 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:48.769 21:49:59 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:48.769 21:49:59 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:48.769 21:49:59 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:48.769 21:49:59 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:48.769 21:49:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:48.769 21:49:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.769 21:49:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.769 21:49:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.769 21:49:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:48.769 21:49:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.769 21:49:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.769 21:49:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.769 21:49:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.769 21:49:59 -- setup/hugepages.sh@83 -- # : 256 00:03:48.769 21:49:59 -- setup/hugepages.sh@84 -- # : 1 00:03:48.769 21:49:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:48.769 21:49:59 -- setup/hugepages.sh@83 -- # : 0 00:03:48.769 21:49:59 -- setup/hugepages.sh@84 -- # : 0 00:03:48.769 21:49:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:48.769 21:49:59 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:48.769 21:49:59 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:48.769 21:49:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:03:48.769 21:49:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:48.769 21:49:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.769 21:49:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.769 21:49:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.769 21:49:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.769 21:49:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.769 21:49:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.769 21:49:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.769 21:49:59 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.769 21:49:59 -- setup/hugepages.sh@78 -- # return 0 00:03:48.769 21:49:59 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:48.769 21:49:59 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.769 21:49:59 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.769 21:49:59 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:48.769 21:49:59 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:48.769 21:49:59 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:48.769 21:49:59 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:48.769 21:49:59 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:48.769 21:49:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:48.769 21:49:59 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:48.769 21:49:59 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:48.769 21:49:59 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:48.769 21:49:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:48.769 21:49:59 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.769 21:49:59 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:48.769 21:49:59 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:48.769 21:49:59 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:48.769 21:49:59 -- setup/hugepages.sh@78 -- # return 0 00:03:48.769 21:49:59 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:48.769 21:49:59 -- setup/hugepages.sh@187 -- # setup output 00:03:48.769 21:49:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.769 21:49:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:52.967 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:80:04.6 (8086 
2021): Already using the vfio-pci driver 00:03:52.967 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.967 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.967 21:50:03 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:52.967 21:50:03 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:52.967 21:50:03 -- setup/hugepages.sh@89 -- # local node 00:03:52.967 21:50:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.967 21:50:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.967 21:50:03 -- setup/hugepages.sh@92 -- # local surp 00:03:52.967 21:50:03 -- setup/hugepages.sh@93 -- # local resv 00:03:52.967 21:50:03 -- setup/hugepages.sh@94 -- # local anon 00:03:52.968 21:50:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.968 21:50:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.968 21:50:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.968 21:50:03 -- setup/common.sh@18 -- # local node= 00:03:52.968 21:50:03 -- setup/common.sh@19 -- # local var val 00:03:52.968 21:50:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.968 21:50:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.968 21:50:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.968 21:50:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.968 21:50:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.968 21:50:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35373656 kB' 'MemAvailable: 40517384 kB' 'Buffers: 4096 kB' 'Cached: 17317128 kB' 'SwapCached: 0 kB' 'Active: 13152556 kB' 'Inactive: 4709516 kB' 'Active(anon): 12674200 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544188 kB' 'Mapped: 198300 kB' 'Shmem: 12133352 kB' 'KReclaimable: 603348 kB' 'Slab: 1314940 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 711592 kB' 'KernelStack: 22560 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14161544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 
-- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- 
setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.968 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.968 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.969 21:50:03 -- setup/common.sh@33 -- # echo 0 00:03:52.969 21:50:03 -- setup/common.sh@33 -- # return 0 00:03:52.969 21:50:03 -- setup/hugepages.sh@97 -- # anon=0 00:03:52.969 21:50:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.969 21:50:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.969 21:50:03 -- setup/common.sh@18 -- # local node= 00:03:52.969 21:50:03 -- setup/common.sh@19 -- # local var val 00:03:52.969 21:50:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.969 21:50:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.969 21:50:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.969 21:50:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.969 21:50:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.969 21:50:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35374204 kB' 'MemAvailable: 40517932 kB' 'Buffers: 4096 kB' 'Cached: 17317132 kB' 'SwapCached: 0 kB' 'Active: 13152180 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673824 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543844 kB' 'Mapped: 198300 kB' 'Shmem: 12133356 kB' 'KReclaimable: 603348 kB' 'Slab: 1314916 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 711568 kB' 'KernelStack: 22544 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14161556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # 
continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.969 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.969 21:50:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 
21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.970 21:50:03 -- setup/common.sh@33 -- # echo 0 00:03:52.970 21:50:03 -- setup/common.sh@33 -- # return 0 00:03:52.970 21:50:03 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.970 21:50:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.970 21:50:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.970 21:50:03 -- setup/common.sh@18 -- # local node= 00:03:52.970 21:50:03 -- setup/common.sh@19 -- # local var val 00:03:52.970 21:50:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.970 21:50:03 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:52.970 21:50:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.970 21:50:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.970 21:50:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.970 21:50:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35374348 kB' 'MemAvailable: 40518076 kB' 'Buffers: 4096 kB' 'Cached: 17317144 kB' 'SwapCached: 0 kB' 'Active: 13152180 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673824 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543812 kB' 'Mapped: 198300 kB' 'Shmem: 12133368 kB' 'KReclaimable: 603348 kB' 'Slab: 1314976 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 711628 kB' 'KernelStack: 22544 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14161572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.970 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.970 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.971 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.971 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.972 21:50:03 -- setup/common.sh@33 -- # echo 0 00:03:52.972 21:50:03 -- setup/common.sh@33 -- # return 0 00:03:52.972 21:50:03 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.972 21:50:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:52.972 nr_hugepages=1536 00:03:52.972 21:50:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.972 resv_hugepages=0 00:03:52.972 21:50:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.972 surplus_hugepages=0 00:03:52.972 21:50:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.972 anon_hugepages=0 00:03:52.972 21:50:03 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:52.972 21:50:03 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:52.972 21:50:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.972 21:50:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.972 21:50:03 -- setup/common.sh@18 -- # local node= 00:03:52.972 21:50:03 -- setup/common.sh@19 -- # local var val 00:03:52.972 21:50:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.972 21:50:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.972 21:50:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.972 21:50:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.972 21:50:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.972 21:50:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 35374348 kB' 'MemAvailable: 40518076 kB' 'Buffers: 4096 kB' 'Cached: 17317156 kB' 'SwapCached: 0 
kB' 'Active: 13152064 kB' 'Inactive: 4709516 kB' 'Active(anon): 12673708 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 543676 kB' 'Mapped: 198300 kB' 'Shmem: 12133380 kB' 'KReclaimable: 603348 kB' 'Slab: 1314976 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 711628 kB' 'KernelStack: 22528 kB' 'PageTables: 8560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963336 kB' 'Committed_AS: 14161584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 
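Note: at this point the test has read surp=0 and resv=0 and is re-reading HugePages_Total to confirm that the 1536 configured pages are fully accounted for ((( 1536 == nr_hugepages + surp + resv ))). A rough standalone sketch of that accounting check follows; the read_key helper and the expected count of 1536 are illustrative, not part of the SPDK scripts.

#!/usr/bin/env bash
# Sketch only: the kernel's HugePages_Total should equal the requested page
# count plus any surplus and reserved pages reported in /proc/meminfo.
expected=1536
read_key() { awk -v k="$1" '$1 == k":" {print $2}' /proc/meminfo; }

total=$(read_key HugePages_Total)
surp=$(read_key HugePages_Surp)
resv=$(read_key HugePages_Rsvd)

if (( total == expected + surp + resv )); then
    echo "hugepage accounting consistent: $total total, $surp surplus, $resv reserved"
else
    echo "mismatch: expected $expected, got total=$total surp=$surp resv=$resv" >&2
    exit 1
fi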
00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.972 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.972 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- 
setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.973 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.973 21:50:03 -- setup/common.sh@33 -- # echo 1536 00:03:52.973 21:50:03 -- setup/common.sh@33 -- # return 0 00:03:52.973 21:50:03 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:52.973 21:50:03 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.973 21:50:03 -- setup/hugepages.sh@27 -- # local node 00:03:52.973 21:50:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.973 21:50:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.973 21:50:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.973 21:50:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.973 21:50:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.973 21:50:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.973 21:50:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.973 21:50:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.973 21:50:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.973 21:50:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.973 21:50:03 -- setup/common.sh@18 -- # local node=0 00:03:52.973 21:50:03 -- setup/common.sh@19 -- # local var val 00:03:52.973 21:50:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.973 21:50:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.973 21:50:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.973 21:50:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.973 21:50:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.973 21:50:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.973 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 22411596 kB' 'MemUsed: 10180488 kB' 'SwapCached: 0 kB' 'Active: 7184976 kB' 'Inactive: 569080 kB' 'Active(anon): 6907664 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7599924 kB' 'Mapped: 68932 kB' 'AnonPages: 157392 kB' 'Shmem: 6753532 kB' 'KernelStack: 11320 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389196 kB' 'Slab: 729688 kB' 'SReclaimable: 389196 kB' 'SUnreclaim: 340492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 
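Note: get_nodes above records the per-NUMA-node split (nodes_sys[0]=512, nodes_sys[1]=1024) before the per-node meminfo reads. A small sketch of one way to obtain those per-node counts is below; the hugepages-2048kB sysfs path assumes the 2048 kB default page size reported earlier in this log.

#!/usr/bin/env bash
# Sketch only: read the configured hugepage count for every NUMA node.
shopt -s extglob
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    echo "node$node: $nr hugepages configured"
done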
00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.974 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.974 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.974 21:50:03 -- setup/common.sh@33 -- # echo 0 00:03:52.974 21:50:03 -- setup/common.sh@33 -- # return 0 00:03:52.974 21:50:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.974 21:50:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.974 21:50:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.974 21:50:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.974 21:50:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.974 21:50:03 -- setup/common.sh@18 -- # local node=1 
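Note: the node=1 read starting here uses /sys/devices/system/node/node1/meminfo, whose lines carry a "Node 1 " prefix; the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips that prefix so the lines parse like /proc/meminfo. A short sketch of that prefix-stripping step follows, assuming a second NUMA node exists on the machine.

#!/usr/bin/env bash
# Sketch only: strip the "Node <n> " prefix from per-node meminfo lines, then
# parse them with the same IFS=': ' split used for /proc/meminfo.
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node1/meminfo
mem=("${mem[@]#Node +([0-9]) }")
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] && echo "node1 surplus hugepages: $val"
done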
00:03:52.974 21:50:03 -- setup/common.sh@19 -- # local var val 00:03:52.974 21:50:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.975 21:50:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.975 21:50:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.975 21:50:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.975 21:50:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.975 21:50:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27703108 kB' 'MemFree: 12963380 kB' 'MemUsed: 14739728 kB' 'SwapCached: 0 kB' 'Active: 5967588 kB' 'Inactive: 4140436 kB' 'Active(anon): 5766544 kB' 'Inactive(anon): 0 kB' 'Active(file): 201044 kB' 'Inactive(file): 4140436 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9721356 kB' 'Mapped: 129368 kB' 'AnonPages: 386740 kB' 'Shmem: 5379876 kB' 'KernelStack: 11224 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 214152 kB' 'Slab: 585288 kB' 'SReclaimable: 214152 kB' 'SUnreclaim: 371136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(anon) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 
-- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.975 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.975 21:50:03 -- setup/common.sh@32 -- # continue 00:03:52.976 21:50:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.976 21:50:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.976 21:50:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.976 21:50:03 -- setup/common.sh@33 -- # echo 0 00:03:52.976 21:50:03 -- setup/common.sh@33 -- # return 0 00:03:52.976 21:50:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.976 21:50:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.976 21:50:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.976 21:50:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.976 21:50:03 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.976 node0=512 expecting 512 00:03:52.976 21:50:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.976 21:50:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.976 21:50:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.976 21:50:03 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:52.976 node1=1024 expecting 1024 00:03:52.976 21:50:03 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:52.976 00:03:52.976 real 0m4.219s 00:03:52.976 user 0m1.521s 00:03:52.976 sys 0m2.739s 00:03:52.976 21:50:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.976 21:50:03 -- common/autotest_common.sh@10 -- # set +x 00:03:52.976 ************************************ 00:03:52.976 END TEST custom_alloc 00:03:52.976 ************************************ 00:03:52.976 21:50:03 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:52.976 21:50:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.976 21:50:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.976 21:50:03 -- common/autotest_common.sh@10 -- # set +x 00:03:52.976 ************************************ 00:03:52.976 START TEST no_shrink_alloc 00:03:52.976 ************************************ 00:03:52.976 21:50:03 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:52.976 21:50:03 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:52.976 21:50:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.976 21:50:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.976 21:50:03 -- setup/hugepages.sh@51 -- # shift 00:03:52.976 21:50:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.976 21:50:03 -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.976 21:50:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.976 21:50:03 -- 
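The block above is the tail of get_meminfo walking /proc/meminfo one field at a time (IFS=': '; read -r var val _; continue until the requested key, here HugePages_Surp, is reached), after which custom_alloc finishes with the expected node0=512 / node1=1024 split and no_shrink_alloc starts. A minimal sketch of that scan pattern, for illustration only (get_meminfo_sketch is a made-up name; the real setup/common.sh snapshots the whole file with mapfile first):

get_meminfo_sketch() {
    local get=$1          # field to report, e.g. HugePages_Surp
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo field
        echo "$val"                        # any kB unit lands in the discarded third field
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Surp   # prints 0 on the node traced here

The per-field continue lines dominate the trace because every key that precedes the requested one produces exactly one comparison and one continue.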
setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.976 21:50:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.976 21:50:03 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.976 21:50:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.976 21:50:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.976 21:50:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.976 21:50:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.976 21:50:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.976 21:50:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.976 21:50:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.976 21:50:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:52.976 21:50:03 -- setup/hugepages.sh@73 -- # return 0 00:03:52.976 21:50:03 -- setup/hugepages.sh@198 -- # setup output 00:03:52.976 21:50:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.976 21:50:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:57.175 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:57.175 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:57.175 21:50:08 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:57.175 21:50:08 -- setup/hugepages.sh@89 -- # local node 00:03:57.175 21:50:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:57.175 21:50:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:57.175 21:50:08 -- setup/hugepages.sh@92 -- # local surp 00:03:57.175 21:50:08 -- setup/hugepages.sh@93 -- # local resv 00:03:57.175 21:50:08 -- setup/hugepages.sh@94 -- # local anon 00:03:57.175 21:50:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:57.175 21:50:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:57.175 21:50:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:57.175 21:50:08 -- setup/common.sh@18 -- # local node= 00:03:57.175 21:50:08 -- setup/common.sh@19 -- # local var val 00:03:57.175 21:50:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.175 21:50:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.175 21:50:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.175 21:50:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.175 21:50:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.175 21:50:08 -- 
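At this point get_test_nr_hugepages has turned the 2097152 request into nr_hugepages=1024 and get_test_nr_hugepages_per_node has pinned all of it to node 0; the numbers are consistent with a size given in kB against the 2048 kB hugepage size reported in the meminfo snapshots below. Illustrative arithmetic only (my own variable names, not the setup/hugepages.sh internals):

size_kb=2097152                                 # requested pool size, in kB
hugepagesize_kb=2048                            # default hugepage size on this system
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024 pages
declare -A nodes_test
user_nodes=(0)                                  # the single node id passed to the helper
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages             # node 0 gets the whole 1024-page pool
done
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"

The 0000:* lines above are setup.sh reporting that every device it manages is already bound to vfio-pci, so no rebinding happens before verify_nr_hugepages runs; what follows is that verification re-reading /proc/meminfo (the long printf snapshots and further field-by-field scans) to pull AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total back out.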
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.175 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.175 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.175 21:50:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36361752 kB' 'MemAvailable: 41505480 kB' 'Buffers: 4096 kB' 'Cached: 17317268 kB' 'SwapCached: 0 kB' 'Active: 13153636 kB' 'Inactive: 4709516 kB' 'Active(anon): 12675280 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544676 kB' 'Mapped: 198392 kB' 'Shmem: 12133492 kB' 'KReclaimable: 603348 kB' 'Slab: 1315424 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712076 kB' 'KernelStack: 22576 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220772 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:57.175 21:50:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.175 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- 
setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.176 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.176 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:57.177 21:50:08 -- setup/common.sh@33 -- # echo 0 00:03:57.177 21:50:08 -- setup/common.sh@33 -- # return 0 00:03:57.177 21:50:08 -- setup/hugepages.sh@97 -- # anon=0 00:03:57.177 21:50:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:57.177 21:50:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.177 21:50:08 -- setup/common.sh@18 -- # local node= 00:03:57.177 21:50:08 -- setup/common.sh@19 -- # local var val 00:03:57.177 21:50:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.177 21:50:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.177 21:50:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.177 21:50:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.177 21:50:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.177 21:50:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36363484 kB' 'MemAvailable: 41507212 kB' 'Buffers: 4096 kB' 'Cached: 17317272 kB' 'SwapCached: 0 kB' 'Active: 13153768 kB' 'Inactive: 4709516 kB' 'Active(anon): 12675412 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544872 kB' 'Mapped: 198380 kB' 'Shmem: 12133496 kB' 'KReclaimable: 603348 kB' 'Slab: 1315416 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712068 kB' 'KernelStack: 22560 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- 
setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.177 21:50:08 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.177 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.177 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.178 21:50:08 -- setup/common.sh@33 -- # echo 0 00:03:57.178 21:50:08 -- setup/common.sh@33 -- # return 0 00:03:57.178 21:50:08 -- setup/hugepages.sh@99 -- # surp=0 00:03:57.178 21:50:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:57.178 21:50:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:57.178 21:50:08 -- setup/common.sh@18 -- # local node= 00:03:57.178 21:50:08 -- setup/common.sh@19 -- # local var val 00:03:57.178 21:50:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.178 21:50:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.178 21:50:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.178 21:50:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.178 21:50:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.178 21:50:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.178 21:50:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36364444 kB' 'MemAvailable: 41508172 kB' 'Buffers: 4096 kB' 'Cached: 17317272 kB' 'SwapCached: 0 kB' 'Active: 13153300 kB' 'Inactive: 4709516 kB' 'Active(anon): 12674944 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544848 kB' 'Mapped: 198304 kB' 'Shmem: 12133496 kB' 'KReclaimable: 603348 kB' 'Slab: 1315416 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712068 kB' 'KernelStack: 22560 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:57.178 21:50:08 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.178 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.178 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- 
setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.179 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.179 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.179 
21:50:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:57.180 21:50:08 -- setup/common.sh@33 -- # echo 0 00:03:57.180 
21:50:08 -- setup/common.sh@33 -- # return 0 00:03:57.180 21:50:08 -- setup/hugepages.sh@100 -- # resv=0 00:03:57.180 21:50:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:57.180 nr_hugepages=1024 00:03:57.180 21:50:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:57.180 resv_hugepages=0 00:03:57.180 21:50:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:57.180 surplus_hugepages=0 00:03:57.180 21:50:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:57.180 anon_hugepages=0 00:03:57.180 21:50:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.180 21:50:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:57.180 21:50:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:57.180 21:50:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:57.180 21:50:08 -- setup/common.sh@18 -- # local node= 00:03:57.180 21:50:08 -- setup/common.sh@19 -- # local var val 00:03:57.180 21:50:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.180 21:50:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.180 21:50:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.180 21:50:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.180 21:50:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.180 21:50:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36364948 kB' 'MemAvailable: 41508676 kB' 'Buffers: 4096 kB' 'Cached: 17317272 kB' 'SwapCached: 0 kB' 'Active: 13153376 kB' 'Inactive: 4709516 kB' 'Active(anon): 12675020 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544916 kB' 'Mapped: 198304 kB' 'Shmem: 12133496 kB' 'KReclaimable: 603348 kB' 'Slab: 1315416 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712068 kB' 'KernelStack: 22592 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.180 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.180 21:50:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 
00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 
21:50:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.181 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.181 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.182 21:50:08 -- setup/common.sh@33 -- # echo 1024 00:03:57.182 21:50:08 -- setup/common.sh@33 -- # return 0 00:03:57.182 21:50:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:57.182 21:50:08 -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.182 21:50:08 -- setup/hugepages.sh@27 -- # local node 00:03:57.182 21:50:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.182 21:50:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:57.182 21:50:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.182 21:50:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:57.182 21:50:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.182 21:50:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.182 21:50:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.182 21:50:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.182 21:50:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.182 21:50:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.182 21:50:08 
-- setup/common.sh@18 -- # local node=0 00:03:57.182 21:50:08 -- setup/common.sh@19 -- # local var val 00:03:57.182 21:50:08 -- setup/common.sh@20 -- # local mem_f mem 00:03:57.182 21:50:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.182 21:50:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.182 21:50:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.182 21:50:08 -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.182 21:50:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21359316 kB' 'MemUsed: 11232768 kB' 'SwapCached: 0 kB' 'Active: 7186092 kB' 'Inactive: 569080 kB' 'Active(anon): 6908780 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7599984 kB' 'Mapped: 68932 kB' 'AnonPages: 158436 kB' 'Shmem: 6753592 kB' 'KernelStack: 11352 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389196 kB' 'Slab: 730024 kB' 'SReclaimable: 389196 kB' 'SUnreclaim: 340828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.182 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.182 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 
00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # continue 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # IFS=': ' 00:03:57.183 21:50:08 -- setup/common.sh@31 -- # read -r var val _ 00:03:57.183 21:50:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:57.183 21:50:08 -- setup/common.sh@33 -- # echo 0 00:03:57.183 21:50:08 -- setup/common.sh@33 -- # return 0 00:03:57.183 21:50:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:57.183 21:50:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:57.183 21:50:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:57.183 21:50:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:57.183 21:50:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:57.183 node0=1024 expecting 1024 00:03:57.183 21:50:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:57.183 21:50:08 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:57.183 21:50:08 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:57.183 21:50:08 -- setup/hugepages.sh@202 -- # setup output 00:03:57.183 21:50:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.183 21:50:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:01.380 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.380 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.380 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:01.380 21:50:12 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 
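The long runs of "[[ <key> == HugePages_Total ]] / continue" entries above come from a helper that scans a meminfo file one key at a time until it finds the requested field. The following is a minimal sketch of that lookup pattern, not the actual setup/common.sh helper; the function name meminfo_value and its argument order are illustrative assumptions, while the file paths, the "Node N " prefix stripping, and the IFS=': ' read loop mirror what the trace shows.

#!/usr/bin/env bash
# Sketch (assumed helper, not the SPDK script itself): pull one field out of
# /proc/meminfo, or out of a per-NUMA-node meminfo file when a node is given.
shopt -s extglob

meminfo_value() {                    # usage: meminfo_value <Key> [node]
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node files live under sysfs and prefix every line with "Node <n> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if present
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] || continue   # the repeated [[ ... ]] / continue entries above
        echo "$val"                        # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done
    return 1
}

meminfo_value HugePages_Total        # system-wide lookup
meminfo_value HugePages_Surp 0       # per-node lookup against node0/meminfo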
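The "node0=1024 expecting 1024" line and the INFO message about 512 requested vs. 1024 already allocated hugepages reflect a per-node accounting pass. Below is a hedged sketch of that kind of check under assumed expectations (all 1024 2 MB pages on node 0, none on node 1); the sysfs nr_hugepages path is standard, but the variable names and the expected layout are illustrative, not taken from setup/hugepages.sh.

#!/usr/bin/env bash
# Sketch: read nr_hugepages for every NUMA node and compare with what the
# test run expects. Expected values below are assumptions for this run only.
shopt -s extglob nullglob

declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "found ${#nodes_sys[@]} NUMA node(s)"

# Assumed layout for this run: NRHUGE=512 was requested, but 1024 pages were
# already allocated on node 0, as the INFO line in the log notes.
declare -A expected=([0]=1024 [1]=0)

rc=0
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]} expecting ${expected[$n]:-0}"
    [[ ${nodes_sys[$n]} -eq ${expected[$n]:-0} ]] || rc=1
done
exit $rc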
00:04:01.380 21:50:12 -- setup/hugepages.sh@89 -- # local node 00:04:01.380 21:50:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.380 21:50:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.380 21:50:12 -- setup/hugepages.sh@92 -- # local surp 00:04:01.380 21:50:12 -- setup/hugepages.sh@93 -- # local resv 00:04:01.380 21:50:12 -- setup/hugepages.sh@94 -- # local anon 00:04:01.380 21:50:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.380 21:50:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.380 21:50:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.380 21:50:12 -- setup/common.sh@18 -- # local node= 00:04:01.380 21:50:12 -- setup/common.sh@19 -- # local var val 00:04:01.380 21:50:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.380 21:50:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.380 21:50:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.380 21:50:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.380 21:50:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.380 21:50:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.380 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.380 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.380 21:50:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36384056 kB' 'MemAvailable: 41527784 kB' 'Buffers: 4096 kB' 'Cached: 17317392 kB' 'SwapCached: 0 kB' 'Active: 13155440 kB' 'Inactive: 4709516 kB' 'Active(anon): 12677084 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546240 kB' 'Mapped: 198396 kB' 'Shmem: 12133616 kB' 'KReclaimable: 603348 kB' 'Slab: 1316084 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712736 kB' 'KernelStack: 22544 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14196428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220756 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:01.380 21:50:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.380 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.380 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.380 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.380 21:50:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.380 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.380 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.380 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.380 21:50:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.380 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.380 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ Buffers 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.381 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.381 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.644 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.644 21:50:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 
21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.645 21:50:12 -- setup/common.sh@33 -- # echo 0 00:04:01.645 21:50:12 -- setup/common.sh@33 -- # return 0 00:04:01.645 21:50:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:01.645 21:50:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.645 21:50:12 -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:04:01.645 21:50:12 -- setup/common.sh@18 -- # local node= 00:04:01.645 21:50:12 -- setup/common.sh@19 -- # local var val 00:04:01.645 21:50:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.645 21:50:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.645 21:50:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.645 21:50:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.645 21:50:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.645 21:50:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36384340 kB' 'MemAvailable: 41528068 kB' 'Buffers: 4096 kB' 'Cached: 17317400 kB' 'SwapCached: 0 kB' 'Active: 13155328 kB' 'Inactive: 4709516 kB' 'Active(anon): 12676972 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546156 kB' 'Mapped: 198456 kB' 'Shmem: 12133624 kB' 'KReclaimable: 603348 kB' 'Slab: 1316060 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712712 kB' 'KernelStack: 22496 kB' 'PageTables: 8448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220708 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 
21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.645 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.645 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 
21:50:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.646 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.646 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.647 21:50:12 -- setup/common.sh@33 -- # echo 0 00:04:01.647 21:50:12 -- setup/common.sh@33 -- # return 0 00:04:01.647 21:50:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:01.647 21:50:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.647 21:50:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.647 21:50:12 -- setup/common.sh@18 -- # local node= 00:04:01.647 21:50:12 -- setup/common.sh@19 -- # local var val 00:04:01.647 21:50:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.647 21:50:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.647 21:50:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.647 21:50:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.647 21:50:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.647 21:50:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36384340 kB' 'MemAvailable: 41528068 kB' 'Buffers: 4096 kB' 'Cached: 17317400 kB' 'SwapCached: 0 kB' 'Active: 13155592 kB' 'Inactive: 4709516 kB' 
'Active(anon): 12677236 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546456 kB' 'Mapped: 198396 kB' 'Shmem: 12133624 kB' 'KReclaimable: 603348 kB' 'Slab: 1316060 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712712 kB' 'KernelStack: 22480 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 
21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 
-- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.647 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.647 21:50:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.648 21:50:12 -- setup/common.sh@33 -- # echo 0 00:04:01.648 21:50:12 -- setup/common.sh@33 -- # return 0 00:04:01.648 21:50:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:01.648 21:50:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.648 nr_hugepages=1024 00:04:01.648 21:50:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.648 resv_hugepages=0 00:04:01.648 21:50:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.648 surplus_hugepages=0 00:04:01.648 21:50:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.648 anon_hugepages=0 00:04:01.648 21:50:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.648 21:50:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:01.648 21:50:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.648 21:50:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.648 21:50:12 -- setup/common.sh@18 -- # local node= 00:04:01.648 21:50:12 -- setup/common.sh@19 -- # local var val 00:04:01.648 21:50:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.648 21:50:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.648 21:50:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.648 21:50:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.648 21:50:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.648 21:50:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295192 kB' 'MemFree: 36385044 kB' 'MemAvailable: 41528772 kB' 'Buffers: 4096 kB' 'Cached: 17317436 kB' 'SwapCached: 0 kB' 'Active: 13154748 kB' 'Inactive: 4709516 kB' 'Active(anon): 12676392 kB' 'Inactive(anon): 0 kB' 'Active(file): 478356 kB' 'Inactive(file): 4709516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546016 kB' 'Mapped: 198316 kB' 'Shmem: 12133660 kB' 'KReclaimable: 603348 kB' 'Slab: 1316068 kB' 'SReclaimable: 603348 kB' 'SUnreclaim: 712720 kB' 'KernelStack: 22512 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487624 kB' 'Committed_AS: 14162964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 220724 kB' 
'VmallocChunk: 0 kB' 'Percpu: 121856 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4255092 kB' 'DirectMap2M: 43665408 kB' 'DirectMap1G: 20971520 kB' 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.648 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.648 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 
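Interleaved with those scans, hugepages.sh checks that the kernel still reports exactly the pages the test requested. With the values echoed in this run (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0) the assertion reduces to simple arithmetic; a restatement, not the test script itself:

    nr_hugepages=1024   # requested page count (echoed as nr_hugepages=1024 in the trace)
    resv=0              # HugePages_Rsvd from get_meminfo
    surp=0              # HugePages_Surp from get_meminfo
    total=1024          # HugePages_Total from get_meminfo
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting holds'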
00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 
-- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.649 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.649 21:50:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.650 21:50:12 -- setup/common.sh@33 -- # echo 1024 00:04:01.650 21:50:12 -- 
setup/common.sh@33 -- # return 0 00:04:01.650 21:50:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.650 21:50:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.650 21:50:12 -- setup/hugepages.sh@27 -- # local node 00:04:01.650 21:50:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.650 21:50:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.650 21:50:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.650 21:50:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:01.650 21:50:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.650 21:50:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.650 21:50:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.650 21:50:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.650 21:50:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.650 21:50:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.650 21:50:12 -- setup/common.sh@18 -- # local node=0 00:04:01.650 21:50:12 -- setup/common.sh@19 -- # local var val 00:04:01.650 21:50:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.650 21:50:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.650 21:50:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.650 21:50:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.650 21:50:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.650 21:50:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32592084 kB' 'MemFree: 21382772 kB' 'MemUsed: 11209312 kB' 'SwapCached: 0 kB' 'Active: 7186640 kB' 'Inactive: 569080 kB' 'Active(anon): 6909328 kB' 'Inactive(anon): 0 kB' 'Active(file): 277312 kB' 'Inactive(file): 569080 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7600044 kB' 'Mapped: 68932 kB' 'AnonPages: 158892 kB' 'Shmem: 6753652 kB' 'KernelStack: 11320 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 389196 kB' 'Slab: 730720 kB' 'SReclaimable: 389196 kB' 'SUnreclaim: 341524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- 
# [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.650 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.650 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 
-- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- 
setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # continue 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.651 21:50:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.651 21:50:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.651 21:50:12 -- setup/common.sh@33 -- # echo 0 00:04:01.651 21:50:12 -- setup/common.sh@33 -- # return 0 00:04:01.651 21:50:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.651 21:50:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.651 21:50:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.651 21:50:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.651 21:50:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:01.651 node0=1024 expecting 1024 00:04:01.651 21:50:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.651 00:04:01.651 real 0m8.868s 00:04:01.651 user 0m3.259s 00:04:01.651 sys 0m5.778s 00:04:01.651 21:50:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.651 21:50:12 -- common/autotest_common.sh@10 -- # set +x 00:04:01.651 ************************************ 00:04:01.651 END TEST no_shrink_alloc 00:04:01.651 ************************************ 00:04:01.651 21:50:12 -- setup/hugepages.sh@217 -- # clear_hp 00:04:01.651 21:50:12 -- setup/hugepages.sh@37 -- # local node hp 00:04:01.651 21:50:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.651 21:50:12 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.651 21:50:12 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.651 21:50:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.651 21:50:12 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.651 21:50:12 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.651 21:50:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.651 21:50:12 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.651 21:50:12 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.651 21:50:12 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.651 21:50:12 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:01.651 21:50:12 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:01.651 00:04:01.652 real 0m31.606s 00:04:01.652 user 0m10.546s 00:04:01.652 sys 0m19.443s 00:04:01.652 21:50:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.652 21:50:12 -- common/autotest_common.sh@10 -- # set +x 00:04:01.652 ************************************ 00:04:01.652 END TEST hugepages 00:04:01.652 ************************************ 00:04:01.652 21:50:12 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:01.652 21:50:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.652 21:50:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.652 21:50:12 -- common/autotest_common.sh@10 -- # set +x 00:04:01.652 ************************************ 00:04:01.652 START TEST driver 00:04:01.652 ************************************ 00:04:01.652 21:50:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:01.911 * Looking for test storage... 
00:04:01.911 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:01.911 21:50:12 -- setup/driver.sh@68 -- # setup reset 00:04:01.911 21:50:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.911 21:50:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.485 21:50:18 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:08.485 21:50:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:08.485 21:50:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.485 21:50:18 -- common/autotest_common.sh@10 -- # set +x 00:04:08.485 ************************************ 00:04:08.485 START TEST guess_driver 00:04:08.485 ************************************ 00:04:08.485 21:50:18 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:08.485 21:50:18 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:08.485 21:50:18 -- setup/driver.sh@47 -- # local fail=0 00:04:08.485 21:50:18 -- setup/driver.sh@49 -- # pick_driver 00:04:08.485 21:50:18 -- setup/driver.sh@36 -- # vfio 00:04:08.485 21:50:18 -- setup/driver.sh@21 -- # local iommu_grups 00:04:08.485 21:50:18 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:08.485 21:50:18 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:08.485 21:50:18 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:08.485 21:50:18 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:08.485 21:50:18 -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:04:08.486 21:50:18 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:08.486 21:50:18 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:08.486 21:50:18 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:08.486 21:50:18 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:08.486 21:50:18 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:08.486 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.486 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.486 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:08.486 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:08.486 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:08.486 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:08.486 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:08.486 21:50:18 -- setup/driver.sh@30 -- # return 0 00:04:08.486 21:50:18 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:08.486 21:50:18 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:08.486 21:50:18 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:08.486 21:50:18 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:08.486 Looking for driver=vfio-pci 00:04:08.486 21:50:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.486 21:50:18 -- setup/driver.sh@45 -- # setup output config 00:04:08.486 21:50:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.486 21:50:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
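guess_driver settles on vfio-pci here because the host exposes IOMMU groups (256 of them) and modprobe can resolve vfio_pci to a module on disk. A condensed sketch of that decision, simplified from the driver.sh logic in the trace (the unsafe-noiommu handling and the other fallbacks in driver.sh are omitted):

    # Sketch: prefer vfio-pci when the IOMMU is usable and the module exists.
    pick_driver_sketch() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
            echo vfio-pci
        else
            echo 'No valid driver found'
        fi
    }
    # On this host: 256 IOMMU groups and vfio-pci.ko.xz resolvable -> vfio-pci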
00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.776 21:50:22 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.776 21:50:22 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.776 21:50:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.736 21:50:24 -- setup/driver.sh@58 -- # [[ -> == \-\> 
]] 00:04:13.736 21:50:24 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.736 21:50:24 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.996 21:50:24 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:13.996 21:50:24 -- setup/driver.sh@65 -- # setup reset 00:04:13.996 21:50:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.996 21:50:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.273 00:04:19.273 real 0m11.643s 00:04:19.273 user 0m2.981s 00:04:19.273 sys 0m5.979s 00:04:19.273 21:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.273 21:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:19.273 ************************************ 00:04:19.273 END TEST guess_driver 00:04:19.273 ************************************ 00:04:19.273 00:04:19.273 real 0m17.453s 00:04:19.273 user 0m4.714s 00:04:19.273 sys 0m9.304s 00:04:19.273 21:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.273 21:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:19.273 ************************************ 00:04:19.273 END TEST driver 00:04:19.273 ************************************ 00:04:19.274 21:50:30 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:19.274 21:50:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:19.274 21:50:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.274 21:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:19.274 ************************************ 00:04:19.274 START TEST devices 00:04:19.274 ************************************ 00:04:19.274 21:50:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:19.274 * Looking for test storage... 
00:04:19.274 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:19.274 21:50:30 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:19.274 21:50:30 -- setup/devices.sh@192 -- # setup reset 00:04:19.274 21:50:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.274 21:50:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.552 21:50:34 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:24.552 21:50:34 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:24.552 21:50:34 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:24.552 21:50:34 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:24.552 21:50:34 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:24.552 21:50:34 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:24.552 21:50:34 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:24.552 21:50:34 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.552 21:50:34 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:24.552 21:50:34 -- setup/devices.sh@196 -- # blocks=() 00:04:24.552 21:50:34 -- setup/devices.sh@196 -- # declare -a blocks 00:04:24.552 21:50:34 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:24.552 21:50:34 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:24.552 21:50:34 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:24.552 21:50:34 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.552 21:50:34 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:24.552 21:50:34 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:24.552 21:50:34 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:04:24.552 21:50:34 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:24.552 21:50:34 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:24.552 21:50:34 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:24.552 21:50:34 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:24.552 No valid GPT data, bailing 00:04:24.552 21:50:34 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.552 21:50:34 -- scripts/common.sh@393 -- # pt= 00:04:24.552 21:50:34 -- scripts/common.sh@394 -- # return 1 00:04:24.552 21:50:34 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:24.552 21:50:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:24.552 21:50:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:24.552 21:50:34 -- setup/common.sh@80 -- # echo 2000398934016 00:04:24.552 21:50:34 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:24.552 21:50:34 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.552 21:50:34 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:24.552 21:50:34 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:24.552 21:50:34 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:24.552 21:50:34 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:24.552 21:50:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.552 21:50:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.552 21:50:34 -- common/autotest_common.sh@10 -- # set +x 00:04:24.552 ************************************ 00:04:24.552 START TEST nvme_mount 00:04:24.552 ************************************ 00:04:24.552 21:50:34 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:24.552 21:50:34 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:24.552 21:50:34 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:24.552 21:50:34 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.552 21:50:34 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.552 21:50:34 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:24.552 21:50:34 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.552 21:50:34 -- setup/common.sh@40 -- # local part_no=1 00:04:24.552 21:50:34 -- setup/common.sh@41 -- # local size=1073741824 00:04:24.552 21:50:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.552 21:50:34 -- setup/common.sh@44 -- # parts=() 00:04:24.552 21:50:34 -- setup/common.sh@44 -- # local parts 00:04:24.552 21:50:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.552 21:50:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.552 21:50:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.552 21:50:34 -- setup/common.sh@46 -- # (( part++ )) 00:04:24.552 21:50:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.552 21:50:34 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:24.552 21:50:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.553 21:50:34 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:24.812 Creating new GPT entries in memory. 00:04:24.812 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.812 other utilities. 00:04:24.812 21:50:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.812 21:50:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.812 21:50:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.812 21:50:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.812 21:50:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.751 Creating new GPT entries in memory. 00:04:25.751 The operation has completed successfully. 
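The single-partition pass traced above boils down to a short sgdisk sequence. A minimal stand-alone sketch, using the device and sector range recorded in this run (the harness additionally waits for the partition uevent via scripts/sync_dev_uevents.sh, which is omitted here):

    sgdisk /dev/nvme0n1 --zap-all                                  # clear any existing GPT/MBR structures
    flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199    # first 1 GiB partition, sectors 2048-2099199
    # wait for /dev/nvme0n1p1 to show up before formatting it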
00:04:25.751 21:50:36 -- setup/common.sh@57 -- # (( part++ )) 00:04:25.751 21:50:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.751 21:50:36 -- setup/common.sh@62 -- # wait 1978047 00:04:25.751 21:50:36 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.751 21:50:36 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:25.751 21:50:36 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.751 21:50:36 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:25.751 21:50:36 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:25.751 21:50:36 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.751 21:50:36 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.751 21:50:36 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:25.751 21:50:36 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:25.751 21:50:36 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.751 21:50:36 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.751 21:50:36 -- setup/devices.sh@53 -- # local found=0 00:04:25.751 21:50:36 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:25.751 21:50:36 -- setup/devices.sh@56 -- # : 00:04:25.751 21:50:36 -- setup/devices.sh@59 -- # local pci status 00:04:25.751 21:50:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.751 21:50:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:25.751 21:50:36 -- setup/devices.sh@47 -- # setup output config 00:04:25.751 21:50:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.751 21:50:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:29.948 21:50:40 -- setup/devices.sh@63 -- # found=1 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:40 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:29.948 21:50:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.948 21:50:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.948 21:50:41 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:29.948 21:50:41 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.948 21:50:41 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.948 21:50:41 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:29.948 21:50:41 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:29.948 21:50:41 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.948 21:50:41 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:29.948 21:50:41 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.948 21:50:41 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:29.948 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.948 21:50:41 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.948 21:50:41 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.208 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:30.208 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:30.208 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.208 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
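The cleanup traced above matters for the next sub-test: the ext4 signature on the partition and the GPT/PMBR headers on the disk are both wiped before the whole-disk variant runs. As plain commands, with the paths and device names taken from this run:

    umount "$nvme_mount"            # only when mountpoint -q reports it is still mounted
    wipefs --all /dev/nvme0n1p1     # drops the ext4 magic (the '53 ef' bytes reported above)
    wipefs --all /dev/nvme0n1       # drops the primary/backup GPT headers and the protective MBR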
00:04:30.208 21:50:41 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:30.208 21:50:41 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:30.208 21:50:41 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.208 21:50:41 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:30.208 21:50:41 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:30.208 21:50:41 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.208 21:50:41 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.208 21:50:41 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:30.208 21:50:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:30.208 21:50:41 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.208 21:50:41 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.208 21:50:41 -- setup/devices.sh@53 -- # local found=0 00:04:30.208 21:50:41 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.208 21:50:41 -- setup/devices.sh@56 -- # : 00:04:30.208 21:50:41 -- setup/devices.sh@59 -- # local pci status 00:04:30.208 21:50:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.208 21:50:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:30.208 21:50:41 -- setup/devices.sh@47 -- # setup output config 00:04:30.208 21:50:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.208 21:50:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:34.414 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.414 21:50:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:34.414 21:50:45 -- setup/devices.sh@63 -- # found=1 00:04:34.414 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.414 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.414 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.414 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.414 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.414 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.415 21:50:45 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.415 21:50:45 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.415 21:50:45 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.415 21:50:45 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.415 21:50:45 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.415 21:50:45 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:34.415 21:50:45 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:34.415 21:50:45 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.415 21:50:45 -- setup/devices.sh@50 -- # local mount_point= 00:04:34.415 21:50:45 -- setup/devices.sh@51 -- # local test_file= 00:04:34.415 21:50:45 -- setup/devices.sh@53 -- # local found=0 00:04:34.415 21:50:45 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.415 21:50:45 -- setup/devices.sh@59 -- # local pci status 00:04:34.415 21:50:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.415 21:50:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:34.415 21:50:45 -- setup/devices.sh@47 -- # setup output config 00:04:34.415 21:50:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.415 21:50:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:38.612 21:50:49 -- setup/devices.sh@63 -- # found=1 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.612 21:50:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.612 21:50:49 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.612 21:50:49 -- setup/devices.sh@68 -- # return 0 00:04:38.612 21:50:49 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:38.612 21:50:49 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.612 21:50:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.612 21:50:49 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:04:38.612 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.612 00:04:38.612 real 0m14.913s 00:04:38.612 user 0m4.491s 00:04:38.612 sys 0m8.390s 00:04:38.612 21:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.612 21:50:49 -- common/autotest_common.sh@10 -- # set +x 00:04:38.612 ************************************ 00:04:38.612 END TEST nvme_mount 00:04:38.612 ************************************ 00:04:38.612 21:50:49 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:38.612 21:50:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.612 21:50:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.612 21:50:49 -- common/autotest_common.sh@10 -- # set +x 00:04:38.612 ************************************ 00:04:38.612 START TEST dm_mount 00:04:38.612 ************************************ 00:04:38.612 21:50:49 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:38.612 21:50:49 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:38.612 21:50:49 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:38.612 21:50:49 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:38.612 21:50:49 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:38.612 21:50:49 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:38.612 21:50:49 -- setup/common.sh@40 -- # local part_no=2 00:04:38.612 21:50:49 -- setup/common.sh@41 -- # local size=1073741824 00:04:38.612 21:50:49 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:38.612 21:50:49 -- setup/common.sh@44 -- # parts=() 00:04:38.612 21:50:49 -- setup/common.sh@44 -- # local parts 00:04:38.612 21:50:49 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:38.612 21:50:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.612 21:50:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.612 21:50:49 -- setup/common.sh@46 -- # (( part++ )) 00:04:38.612 21:50:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.612 21:50:49 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:38.612 21:50:49 -- setup/common.sh@46 -- # (( part++ )) 00:04:38.612 21:50:49 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:38.612 21:50:49 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:38.612 21:50:49 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:38.612 21:50:49 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:39.551 Creating new GPT entries in memory. 00:04:39.551 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:39.551 other utilities. 00:04:39.551 21:50:50 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:39.551 21:50:50 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.551 21:50:50 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.551 21:50:50 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.551 21:50:50 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:40.969 Creating new GPT entries in memory. 00:04:40.969 The operation has completed successfully. 00:04:40.969 21:50:51 -- setup/common.sh@57 -- # (( part++ )) 00:04:40.969 21:50:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.969 21:50:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:40.969 21:50:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.969 21:50:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:41.907 The operation has completed successfully. 00:04:41.907 21:50:52 -- setup/common.sh@57 -- # (( part++ )) 00:04:41.907 21:50:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.907 21:50:52 -- setup/common.sh@62 -- # wait 1983345 00:04:41.907 21:50:52 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.907 21:50:52 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.907 21:50:52 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.907 21:50:52 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.907 21:50:52 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.907 21:50:52 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.907 21:50:52 -- setup/devices.sh@161 -- # break 00:04:41.907 21:50:52 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.907 21:50:52 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.907 21:50:52 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:41.907 21:50:52 -- setup/devices.sh@166 -- # dm=dm-2 00:04:41.907 21:50:52 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:41.907 21:50:52 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:41.907 21:50:52 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.907 21:50:52 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:41.907 21:50:52 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.907 21:50:52 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.907 21:50:52 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.907 21:50:52 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.907 21:50:52 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.907 21:50:52 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:41.907 21:50:52 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.907 21:50:52 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:41.907 21:50:52 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.907 21:50:52 -- setup/devices.sh@53 -- # local found=0 00:04:41.907 21:50:52 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.907 21:50:52 -- setup/devices.sh@56 -- # : 00:04:41.907 21:50:52 -- setup/devices.sh@59 -- # local pci status 00:04:41.907 21:50:52 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.907 21:50:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:41.907 21:50:52 -- setup/devices.sh@47 -- # setup output config 
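At this point the trace has created /dev/mapper/nvme_dm_test (resolving to dm-2) on top of the two 1 GiB partitions and formatted it. The device-mapper table itself is not echoed in this excerpt; a hypothetical linear table matching the partition sizes created earlier (2097152 sectors each) would look roughly like the following, where the sector layout is illustrative rather than read from this run:

    # hypothetical two-target linear table over nvme0n1p1 and nvme0n1p2
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' | dmsetup create nvme_dm_test
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test    # same mkfs call the trace shows for the mapped device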
00:04:41.907 21:50:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.907 21:50:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:46.099 21:50:56 -- setup/devices.sh@63 -- # found=1 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.099 21:50:56 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:46.099 21:50:56 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:46.099 21:50:56 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.099 21:50:56 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:46.099 21:50:56 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:46.099 21:50:56 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:46.099 21:50:56 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:46.099 21:50:56 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:46.099 21:50:56 -- setup/devices.sh@50 -- # local mount_point= 00:04:46.099 21:50:56 -- setup/devices.sh@51 -- # local test_file= 00:04:46.099 21:50:56 -- setup/devices.sh@53 -- # local found=0 00:04:46.099 21:50:56 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.099 21:50:56 -- setup/devices.sh@59 -- # local pci status 00:04:46.099 21:50:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.099 21:50:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:46.099 21:50:56 -- setup/devices.sh@47 -- # setup output config 00:04:46.099 21:50:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.099 21:50:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:49.393 21:51:00 -- setup/devices.sh@63 -- # found=1 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.393 21:51:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.393 21:51:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:49.393 21:51:00 -- setup/devices.sh@68 -- # return 0 00:04:49.393 21:51:00 -- setup/devices.sh@187 -- # cleanup_dm 00:04:49.393 21:51:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:49.393 21:51:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:49.393 21:51:00 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:49.393 21:51:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:49.393 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:49.393 21:51:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:49.393 00:04:49.393 real 0m10.801s 00:04:49.393 user 0m2.481s 00:04:49.393 sys 0m5.246s 00:04:49.393 21:51:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.393 21:51:00 -- common/autotest_common.sh@10 -- # set +x 00:04:49.393 ************************************ 00:04:49.393 END TEST dm_mount 00:04:49.393 ************************************ 00:04:49.393 21:51:00 -- setup/devices.sh@1 -- # cleanup 00:04:49.393 21:51:00 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:49.393 21:51:00 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.393 21:51:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:49.393 21:51:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.393 21:51:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:49.651 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:49.651 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:49.651 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:49.651 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:49.651 21:51:00 -- setup/devices.sh@12 
-- # cleanup_dm 00:04:49.651 21:51:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:49.651 21:51:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:49.651 21:51:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:49.651 21:51:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:49.651 21:51:00 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:49.651 21:51:00 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:49.651 00:04:49.651 real 0m30.546s 00:04:49.651 user 0m8.619s 00:04:49.651 sys 0m16.757s 00:04:49.651 21:51:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.651 21:51:00 -- common/autotest_common.sh@10 -- # set +x 00:04:49.651 ************************************ 00:04:49.651 END TEST devices 00:04:49.651 ************************************ 00:04:49.910 00:04:49.910 real 1m47.640s 00:04:49.910 user 0m32.269s 00:04:49.910 sys 1m2.509s 00:04:49.910 21:51:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.910 21:51:00 -- common/autotest_common.sh@10 -- # set +x 00:04:49.910 ************************************ 00:04:49.910 END TEST setup.sh 00:04:49.910 ************************************ 00:04:49.910 21:51:00 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:54.103 Hugepages 00:04:54.103 node hugesize free / total 00:04:54.103 node0 1048576kB 0 / 0 00:04:54.103 node0 2048kB 2048 / 2048 00:04:54.103 node1 1048576kB 0 / 0 00:04:54.103 node1 2048kB 0 / 0 00:04:54.103 00:04:54.103 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.103 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:54.103 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:54.103 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:54.103 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:54.103 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:54.103 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:54.103 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:54.103 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:54.103 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:54.103 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:54.103 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:54.103 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:54.103 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:54.103 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:54.103 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:54.103 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:54.103 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:54.103 21:51:04 -- spdk/autotest.sh@141 -- # uname -s 00:04:54.103 21:51:04 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:54.103 21:51:04 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:54.103 21:51:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:57.395 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:04:57.395 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.395 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:59.303 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:59.562 21:51:10 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:00.502 21:51:11 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:00.502 21:51:11 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:00.502 21:51:11 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:00.502 21:51:11 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:00.502 21:51:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:00.502 21:51:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:00.502 21:51:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:00.502 21:51:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:00.502 21:51:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:00.502 21:51:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:00.502 21:51:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:00.502 21:51:11 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.737 Waiting for block devices as requested 00:05:04.737 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:04.737 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:04.998 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:04.998 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:04.998 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:05.258 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:05.258 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:05.518 21:51:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:05.518 21:51:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:05.518 21:51:16 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:05:05.518 21:51:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:05.518 21:51:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:05.518 21:51:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:05.518 21:51:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:05.518 21:51:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:05.518 21:51:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 
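The controller lookup traced above (readlink on /sys/class/nvme/nvme0 plus a grep for the 0000:d8:00.0 path) generalizes to a small loop. A minimal sketch assuming only the standard sysfs layout, with the BDF taken from this run:

    bdf=0000:d8:00.0
    for ctrl in /sys/class/nvme/nvme*; do
        [[ "$(readlink -f "$ctrl")" == *"/$bdf/nvme/"* ]] && echo "/dev/${ctrl##*/}"
    done
    # on this node the loop prints /dev/nvme0, which the trace then probes with:
    nvme id-ctrl /dev/nvme0 | grep oacs    # OACS is 0xe here, so namespace management (bit 3) is set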
00:05:05.518 21:51:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:05.518 21:51:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:05.518 21:51:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:05.518 21:51:16 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:05.518 21:51:16 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:05:05.518 21:51:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:05.518 21:51:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:05.518 21:51:16 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:05.518 21:51:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:05.518 21:51:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:05.518 21:51:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:05.518 21:51:16 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:05.518 21:51:16 -- common/autotest_common.sh@1542 -- # continue 00:05:05.518 21:51:16 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:05.518 21:51:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:05.518 21:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:05.518 21:51:16 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:05.518 21:51:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:05.518 21:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:05.518 21:51:16 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:09.715 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:09.715 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:11.623 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:11.884 21:51:22 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:11.884 21:51:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:11.884 21:51:22 -- common/autotest_common.sh@10 -- # set +x 00:05:11.884 21:51:22 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:11.884 21:51:22 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:11.884 21:51:22 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:11.884 21:51:22 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:11.884 21:51:22 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:11.884 21:51:22 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:11.884 21:51:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:11.884 21:51:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:11.884 21:51:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:11.884 21:51:22 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:11.884 21:51:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:11.884 21:51:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:11.884 21:51:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:05:11.884 21:51:23 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:11.884 21:51:23 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:12.148 21:51:23 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:12.148 21:51:23 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:12.148 21:51:23 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:12.148 21:51:23 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:d8:00.0 00:05:12.148 21:51:23 -- common/autotest_common.sh@1577 -- # [[ -z 0000:d8:00.0 ]] 00:05:12.148 21:51:23 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1995096 00:05:12.148 21:51:23 -- common/autotest_common.sh@1583 -- # waitforlisten 1995096 00:05:12.148 21:51:23 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.148 21:51:23 -- common/autotest_common.sh@819 -- # '[' -z 1995096 ']' 00:05:12.148 21:51:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.148 21:51:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:12.148 21:51:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.148 21:51:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:12.148 21:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:12.148 [2024-07-26 21:51:23.149085] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:12.148 [2024-07-26 21:51:23.149138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995096 ] 00:05:12.148 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.148 [2024-07-26 21:51:23.235215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.148 [2024-07-26 21:51:23.273928] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:12.148 [2024-07-26 21:51:23.274041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.716 21:51:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.716 21:51:23 -- common/autotest_common.sh@852 -- # return 0 00:05:12.716 21:51:23 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:12.716 21:51:23 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:12.716 21:51:23 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:16.001 nvme0n1 00:05:16.001 21:51:26 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:16.001 [2024-07-26 21:51:27.054858] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:16.001 request: 00:05:16.001 { 00:05:16.001 "nvme_ctrlr_name": "nvme0", 00:05:16.001 "password": "test", 00:05:16.001 "method": "bdev_nvme_opal_revert", 00:05:16.001 "req_id": 1 00:05:16.001 } 00:05:16.001 Got JSON-RPC error response 00:05:16.001 response: 00:05:16.001 { 00:05:16.001 "code": -32602, 00:05:16.001 "message": "Invalid parameters" 00:05:16.001 } 00:05:16.001 21:51:27 -- common/autotest_common.sh@1589 -- # true 00:05:16.001 21:51:27 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:16.001 21:51:27 -- common/autotest_common.sh@1593 -- # killprocess 1995096 00:05:16.001 21:51:27 -- common/autotest_common.sh@926 -- # '[' -z 1995096 ']' 00:05:16.001 21:51:27 -- common/autotest_common.sh@930 -- # kill -0 1995096 00:05:16.001 21:51:27 -- common/autotest_common.sh@931 -- # uname 00:05:16.001 21:51:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:16.001 21:51:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1995096 00:05:16.001 21:51:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:16.001 21:51:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:16.001 21:51:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1995096' 00:05:16.001 killing process with pid 1995096 00:05:16.001 21:51:27 -- common/autotest_common.sh@945 -- # kill 1995096 00:05:16.001 21:51:27 -- common/autotest_common.sh@950 -- # wait 1995096 00:05:18.537 21:51:29 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:18.537 21:51:29 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:18.537 21:51:29 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:18.537 21:51:29 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:18.537 21:51:29 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:18.537 21:51:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:18.537 21:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.537 21:51:29 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:18.537 21:51:29 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.537 21:51:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.537 21:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.537 ************************************ 00:05:18.537 START TEST env 00:05:18.537 ************************************ 00:05:18.537 21:51:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:18.537 * Looking for test storage... 00:05:18.537 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:18.537 21:51:29 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.537 21:51:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.537 21:51:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.537 21:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.797 ************************************ 00:05:18.797 START TEST env_memory 00:05:18.797 ************************************ 00:05:18.797 21:51:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.797 00:05:18.797 00:05:18.797 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.797 http://cunit.sourceforge.net/ 00:05:18.797 00:05:18.797 00:05:18.797 Suite: memory 00:05:18.797 Test: alloc and free memory map ...[2024-07-26 21:51:29.811642] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:18.797 passed 00:05:18.797 Test: mem map translation ...[2024-07-26 21:51:29.830107] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:18.797 [2024-07-26 21:51:29.830124] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:18.797 [2024-07-26 21:51:29.830161] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:18.797 [2024-07-26 21:51:29.830169] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:18.797 passed 00:05:18.797 Test: mem map registration ...[2024-07-26 21:51:29.865550] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:18.797 [2024-07-26 21:51:29.865565] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:18.797 passed 00:05:18.797 Test: mem map adjacent registrations ...passed 00:05:18.797 00:05:18.797 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.797 suites 1 1 n/a 0 0 00:05:18.797 tests 4 4 4 0 0 00:05:18.797 asserts 152 152 152 0 n/a 00:05:18.797 00:05:18.797 Elapsed time = 0.131 seconds 00:05:18.797 00:05:18.797 real 0m0.144s 00:05:18.797 user 0m0.130s 00:05:18.797 sys 0m0.014s 00:05:18.797 21:51:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.797 21:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.797 ************************************ 
00:05:18.797 END TEST env_memory 00:05:18.797 ************************************ 00:05:18.797 21:51:29 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:18.797 21:51:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.797 21:51:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.797 21:51:29 -- common/autotest_common.sh@10 -- # set +x 00:05:18.797 ************************************ 00:05:18.797 START TEST env_vtophys 00:05:18.797 ************************************ 00:05:18.797 21:51:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:18.797 EAL: lib.eal log level changed from notice to debug 00:05:18.797 EAL: Detected lcore 0 as core 0 on socket 0 00:05:18.797 EAL: Detected lcore 1 as core 1 on socket 0 00:05:18.797 EAL: Detected lcore 2 as core 2 on socket 0 00:05:18.797 EAL: Detected lcore 3 as core 3 on socket 0 00:05:18.797 EAL: Detected lcore 4 as core 4 on socket 0 00:05:18.797 EAL: Detected lcore 5 as core 5 on socket 0 00:05:18.797 EAL: Detected lcore 6 as core 6 on socket 0 00:05:18.797 EAL: Detected lcore 7 as core 8 on socket 0 00:05:18.797 EAL: Detected lcore 8 as core 9 on socket 0 00:05:18.797 EAL: Detected lcore 9 as core 10 on socket 0 00:05:18.797 EAL: Detected lcore 10 as core 11 on socket 0 00:05:18.797 EAL: Detected lcore 11 as core 12 on socket 0 00:05:18.797 EAL: Detected lcore 12 as core 13 on socket 0 00:05:18.797 EAL: Detected lcore 13 as core 14 on socket 0 00:05:18.797 EAL: Detected lcore 14 as core 16 on socket 0 00:05:18.797 EAL: Detected lcore 15 as core 17 on socket 0 00:05:18.797 EAL: Detected lcore 16 as core 18 on socket 0 00:05:18.797 EAL: Detected lcore 17 as core 19 on socket 0 00:05:18.797 EAL: Detected lcore 18 as core 20 on socket 0 00:05:18.797 EAL: Detected lcore 19 as core 21 on socket 0 00:05:18.797 EAL: Detected lcore 20 as core 22 on socket 0 00:05:18.797 EAL: Detected lcore 21 as core 24 on socket 0 00:05:18.797 EAL: Detected lcore 22 as core 25 on socket 0 00:05:18.797 EAL: Detected lcore 23 as core 26 on socket 0 00:05:18.797 EAL: Detected lcore 24 as core 27 on socket 0 00:05:18.797 EAL: Detected lcore 25 as core 28 on socket 0 00:05:18.797 EAL: Detected lcore 26 as core 29 on socket 0 00:05:18.797 EAL: Detected lcore 27 as core 30 on socket 0 00:05:18.797 EAL: Detected lcore 28 as core 0 on socket 1 00:05:18.797 EAL: Detected lcore 29 as core 1 on socket 1 00:05:18.797 EAL: Detected lcore 30 as core 2 on socket 1 00:05:18.797 EAL: Detected lcore 31 as core 3 on socket 1 00:05:18.797 EAL: Detected lcore 32 as core 4 on socket 1 00:05:18.797 EAL: Detected lcore 33 as core 5 on socket 1 00:05:18.797 EAL: Detected lcore 34 as core 6 on socket 1 00:05:18.797 EAL: Detected lcore 35 as core 8 on socket 1 00:05:18.797 EAL: Detected lcore 36 as core 9 on socket 1 00:05:18.797 EAL: Detected lcore 37 as core 10 on socket 1 00:05:18.797 EAL: Detected lcore 38 as core 11 on socket 1 00:05:18.797 EAL: Detected lcore 39 as core 12 on socket 1 00:05:18.797 EAL: Detected lcore 40 as core 13 on socket 1 00:05:18.797 EAL: Detected lcore 41 as core 14 on socket 1 00:05:18.797 EAL: Detected lcore 42 as core 16 on socket 1 00:05:18.797 EAL: Detected lcore 43 as core 17 on socket 1 00:05:18.797 EAL: Detected lcore 44 as core 18 on socket 1 00:05:18.797 EAL: Detected lcore 45 as core 19 on socket 1 00:05:18.797 EAL: Detected lcore 46 as core 20 on socket 1 00:05:18.797 EAL: Detected lcore 47 as 
core 21 on socket 1 00:05:18.797 EAL: Detected lcore 48 as core 22 on socket 1 00:05:18.797 EAL: Detected lcore 49 as core 24 on socket 1 00:05:18.797 EAL: Detected lcore 50 as core 25 on socket 1 00:05:18.797 EAL: Detected lcore 51 as core 26 on socket 1 00:05:18.797 EAL: Detected lcore 52 as core 27 on socket 1 00:05:18.797 EAL: Detected lcore 53 as core 28 on socket 1 00:05:18.797 EAL: Detected lcore 54 as core 29 on socket 1 00:05:18.797 EAL: Detected lcore 55 as core 30 on socket 1 00:05:18.797 EAL: Detected lcore 56 as core 0 on socket 0 00:05:18.797 EAL: Detected lcore 57 as core 1 on socket 0 00:05:18.797 EAL: Detected lcore 58 as core 2 on socket 0 00:05:18.797 EAL: Detected lcore 59 as core 3 on socket 0 00:05:18.797 EAL: Detected lcore 60 as core 4 on socket 0 00:05:18.797 EAL: Detected lcore 61 as core 5 on socket 0 00:05:18.797 EAL: Detected lcore 62 as core 6 on socket 0 00:05:18.797 EAL: Detected lcore 63 as core 8 on socket 0 00:05:18.797 EAL: Detected lcore 64 as core 9 on socket 0 00:05:18.797 EAL: Detected lcore 65 as core 10 on socket 0 00:05:18.798 EAL: Detected lcore 66 as core 11 on socket 0 00:05:18.798 EAL: Detected lcore 67 as core 12 on socket 0 00:05:18.798 EAL: Detected lcore 68 as core 13 on socket 0 00:05:18.798 EAL: Detected lcore 69 as core 14 on socket 0 00:05:18.798 EAL: Detected lcore 70 as core 16 on socket 0 00:05:18.798 EAL: Detected lcore 71 as core 17 on socket 0 00:05:18.798 EAL: Detected lcore 72 as core 18 on socket 0 00:05:18.798 EAL: Detected lcore 73 as core 19 on socket 0 00:05:18.798 EAL: Detected lcore 74 as core 20 on socket 0 00:05:18.798 EAL: Detected lcore 75 as core 21 on socket 0 00:05:18.798 EAL: Detected lcore 76 as core 22 on socket 0 00:05:18.798 EAL: Detected lcore 77 as core 24 on socket 0 00:05:18.798 EAL: Detected lcore 78 as core 25 on socket 0 00:05:18.798 EAL: Detected lcore 79 as core 26 on socket 0 00:05:18.798 EAL: Detected lcore 80 as core 27 on socket 0 00:05:18.798 EAL: Detected lcore 81 as core 28 on socket 0 00:05:18.798 EAL: Detected lcore 82 as core 29 on socket 0 00:05:18.798 EAL: Detected lcore 83 as core 30 on socket 0 00:05:18.798 EAL: Detected lcore 84 as core 0 on socket 1 00:05:18.798 EAL: Detected lcore 85 as core 1 on socket 1 00:05:18.798 EAL: Detected lcore 86 as core 2 on socket 1 00:05:18.798 EAL: Detected lcore 87 as core 3 on socket 1 00:05:18.798 EAL: Detected lcore 88 as core 4 on socket 1 00:05:18.798 EAL: Detected lcore 89 as core 5 on socket 1 00:05:18.798 EAL: Detected lcore 90 as core 6 on socket 1 00:05:18.798 EAL: Detected lcore 91 as core 8 on socket 1 00:05:18.798 EAL: Detected lcore 92 as core 9 on socket 1 00:05:18.798 EAL: Detected lcore 93 as core 10 on socket 1 00:05:18.798 EAL: Detected lcore 94 as core 11 on socket 1 00:05:18.798 EAL: Detected lcore 95 as core 12 on socket 1 00:05:18.798 EAL: Detected lcore 96 as core 13 on socket 1 00:05:18.798 EAL: Detected lcore 97 as core 14 on socket 1 00:05:18.798 EAL: Detected lcore 98 as core 16 on socket 1 00:05:18.798 EAL: Detected lcore 99 as core 17 on socket 1 00:05:18.798 EAL: Detected lcore 100 as core 18 on socket 1 00:05:18.798 EAL: Detected lcore 101 as core 19 on socket 1 00:05:18.798 EAL: Detected lcore 102 as core 20 on socket 1 00:05:18.798 EAL: Detected lcore 103 as core 21 on socket 1 00:05:18.798 EAL: Detected lcore 104 as core 22 on socket 1 00:05:18.798 EAL: Detected lcore 105 as core 24 on socket 1 00:05:18.798 EAL: Detected lcore 106 as core 25 on socket 1 00:05:18.798 EAL: Detected lcore 107 as core 26 on socket 1 
00:05:18.798 EAL: Detected lcore 108 as core 27 on socket 1 00:05:18.798 EAL: Detected lcore 109 as core 28 on socket 1 00:05:18.798 EAL: Detected lcore 110 as core 29 on socket 1 00:05:18.798 EAL: Detected lcore 111 as core 30 on socket 1 00:05:18.798 EAL: Maximum logical cores by configuration: 128 00:05:18.798 EAL: Detected CPU lcores: 112 00:05:18.798 EAL: Detected NUMA nodes: 2 00:05:18.798 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:18.798 EAL: Detected shared linkage of DPDK 00:05:18.798 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:18.798 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:18.798 EAL: Registered [vdev] bus. 00:05:18.798 EAL: bus.vdev log level changed from disabled to notice 00:05:18.798 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:18.798 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:18.798 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:18.798 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:18.798 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:18.798 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:18.798 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:18.798 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:18.798 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Bus pci wants IOVA as 'DC' 00:05:19.058 EAL: Bus vdev wants IOVA as 'DC' 00:05:19.058 EAL: Buses did not request a specific IOVA mode. 00:05:19.058 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.058 EAL: Selected IOVA mode 'VA' 00:05:19.058 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.058 EAL: Probing VFIO support... 00:05:19.058 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.058 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.058 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.058 EAL: VFIO support initialized 00:05:19.058 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.058 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.058 EAL: Setting up physically contiguous memory... 
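[editor's note] The "No free 2048 kB hugepages reported on node 1" notice just above means EAL found no free 2 MB hugepages on NUMA node 1 at probe time; the run continues with the hugepages available elsewhere. A minimal sketch of how the per-node hugepage pools can be inspected on the test host (standard Linux procfs/sysfs paths, not commands taken from this run):
  # Overall and per-NUMA-node availability of 2 MB hugepages
  grep -i huge /proc/meminfo
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages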
00:05:19.058 EAL: Setting maximum number of open files to 524288 00:05:19.058 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.058 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.058 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.058 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.058 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.058 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.058 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.058 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.058 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.058 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.058 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.058 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.058 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.058 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.058 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.058 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.058 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:19.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.058 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.058 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.058 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.058 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.058 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.058 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.058 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:19.058 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:19.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.058 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:19.058 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.058 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:19.058 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:19.058 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.058 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:19.058 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.058 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.058 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:19.058 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:19.058 EAL: Hugepages will be freed exactly as allocated. 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: TSC frequency is ~2500000 KHz 00:05:19.058 EAL: Main lcore 0 is ready (tid=7f50df048a00;cpuset=[0]) 00:05:19.058 EAL: Trying to obtain current memory policy. 00:05:19.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.058 EAL: Restoring previous memory policy: 0 00:05:19.058 EAL: request: mp_malloc_sync 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.058 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:05:19.058 EAL: probe driver: 8086:37d2 net_i40e 00:05:19.058 EAL: Not managed by a supported kernel driver, skipped 00:05:19.058 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:05:19.058 EAL: probe driver: 8086:37d2 net_i40e 00:05:19.058 EAL: Not managed by a supported kernel driver, skipped 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.058 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.058 00:05:19.058 00:05:19.058 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.058 http://cunit.sourceforge.net/ 00:05:19.058 00:05:19.058 00:05:19.058 Suite: components_suite 00:05:19.058 Test: vtophys_malloc_test ...passed 00:05:19.058 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:19.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.058 EAL: Restoring previous memory policy: 4 00:05:19.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.058 EAL: request: mp_malloc_sync 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.058 EAL: request: mp_malloc_sync 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.058 EAL: Trying to obtain current memory policy. 00:05:19.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.058 EAL: Restoring previous memory policy: 4 00:05:19.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.058 EAL: request: mp_malloc_sync 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Heap on socket 0 was expanded by 6MB 00:05:19.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.058 EAL: request: mp_malloc_sync 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Heap on socket 0 was shrunk by 6MB 00:05:19.058 EAL: Trying to obtain current memory policy. 00:05:19.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.058 EAL: Restoring previous memory policy: 4 00:05:19.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.058 EAL: request: mp_malloc_sync 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Heap on socket 0 was expanded by 10MB 00:05:19.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.058 EAL: request: mp_malloc_sync 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Heap on socket 0 was shrunk by 10MB 00:05:19.058 EAL: Trying to obtain current memory policy. 
00:05:19.058 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.058 EAL: Restoring previous memory policy: 4 00:05:19.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.058 EAL: request: mp_malloc_sync 00:05:19.058 EAL: No shared files mode enabled, IPC is disabled 00:05:19.058 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.058 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.059 EAL: request: mp_malloc_sync 00:05:19.059 EAL: No shared files mode enabled, IPC is disabled 00:05:19.059 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.059 EAL: Trying to obtain current memory policy. 00:05:19.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.059 EAL: Restoring previous memory policy: 4 00:05:19.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.059 EAL: request: mp_malloc_sync 00:05:19.059 EAL: No shared files mode enabled, IPC is disabled 00:05:19.059 EAL: Heap on socket 0 was expanded by 34MB 00:05:19.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.059 EAL: request: mp_malloc_sync 00:05:19.059 EAL: No shared files mode enabled, IPC is disabled 00:05:19.059 EAL: Heap on socket 0 was shrunk by 34MB 00:05:19.059 EAL: Trying to obtain current memory policy. 00:05:19.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.059 EAL: Restoring previous memory policy: 4 00:05:19.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.059 EAL: request: mp_malloc_sync 00:05:19.059 EAL: No shared files mode enabled, IPC is disabled 00:05:19.059 EAL: Heap on socket 0 was expanded by 66MB 00:05:19.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.059 EAL: request: mp_malloc_sync 00:05:19.059 EAL: No shared files mode enabled, IPC is disabled 00:05:19.059 EAL: Heap on socket 0 was shrunk by 66MB 00:05:19.059 EAL: Trying to obtain current memory policy. 00:05:19.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.059 EAL: Restoring previous memory policy: 4 00:05:19.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.059 EAL: request: mp_malloc_sync 00:05:19.059 EAL: No shared files mode enabled, IPC is disabled 00:05:19.059 EAL: Heap on socket 0 was expanded by 130MB 00:05:19.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.059 EAL: request: mp_malloc_sync 00:05:19.059 EAL: No shared files mode enabled, IPC is disabled 00:05:19.059 EAL: Heap on socket 0 was shrunk by 130MB 00:05:19.059 EAL: Trying to obtain current memory policy. 00:05:19.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.059 EAL: Restoring previous memory policy: 4 00:05:19.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.059 EAL: request: mp_malloc_sync 00:05:19.059 EAL: No shared files mode enabled, IPC is disabled 00:05:19.059 EAL: Heap on socket 0 was expanded by 258MB 00:05:19.059 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.318 EAL: request: mp_malloc_sync 00:05:19.318 EAL: No shared files mode enabled, IPC is disabled 00:05:19.318 EAL: Heap on socket 0 was shrunk by 258MB 00:05:19.318 EAL: Trying to obtain current memory policy. 
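[editor's note] The heap expand/shrink cycle logged here comes from the standalone vtophys CUnit binary. A rough sketch of reproducing it outside the harness, assuming the workspace layout printed in this log and SPDK's scripts/setup.sh hugepage helper (the HUGEMEM variable is an SPDK convention and is not shown in this output):
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  sudo HUGEMEM=2048 ./scripts/setup.sh    # reserve 2 MB hugepages before running EAL-based tests
  sudo ./test/env/vtophys/vtophys         # standalone run of vtophys_malloc_test / vtophys_spdk_malloc_test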
00:05:19.318 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.318 EAL: Restoring previous memory policy: 4 00:05:19.318 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.318 EAL: request: mp_malloc_sync 00:05:19.318 EAL: No shared files mode enabled, IPC is disabled 00:05:19.318 EAL: Heap on socket 0 was expanded by 514MB 00:05:19.318 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.577 EAL: request: mp_malloc_sync 00:05:19.577 EAL: No shared files mode enabled, IPC is disabled 00:05:19.577 EAL: Heap on socket 0 was shrunk by 514MB 00:05:19.577 EAL: Trying to obtain current memory policy. 00:05:19.577 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.577 EAL: Restoring previous memory policy: 4 00:05:19.578 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.578 EAL: request: mp_malloc_sync 00:05:19.578 EAL: No shared files mode enabled, IPC is disabled 00:05:19.578 EAL: Heap on socket 0 was expanded by 1026MB 00:05:19.837 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.097 EAL: request: mp_malloc_sync 00:05:20.097 EAL: No shared files mode enabled, IPC is disabled 00:05:20.097 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:20.097 passed 00:05:20.097 00:05:20.097 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.097 suites 1 1 n/a 0 0 00:05:20.097 tests 2 2 2 0 0 00:05:20.097 asserts 497 497 497 0 n/a 00:05:20.097 00:05:20.097 Elapsed time = 0.967 seconds 00:05:20.097 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.097 EAL: request: mp_malloc_sync 00:05:20.097 EAL: No shared files mode enabled, IPC is disabled 00:05:20.097 EAL: Heap on socket 0 was shrunk by 2MB 00:05:20.097 EAL: No shared files mode enabled, IPC is disabled 00:05:20.097 EAL: No shared files mode enabled, IPC is disabled 00:05:20.097 EAL: No shared files mode enabled, IPC is disabled 00:05:20.097 00:05:20.097 real 0m1.119s 00:05:20.097 user 0m0.647s 00:05:20.097 sys 0m0.437s 00:05:20.097 21:51:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.097 21:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:20.097 ************************************ 00:05:20.097 END TEST env_vtophys 00:05:20.097 ************************************ 00:05:20.097 21:51:31 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.097 21:51:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.097 21:51:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.097 21:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:20.097 ************************************ 00:05:20.097 START TEST env_pci 00:05:20.097 ************************************ 00:05:20.097 21:51:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:20.097 00:05:20.097 00:05:20.097 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.097 http://cunit.sourceforge.net/ 00:05:20.097 00:05:20.097 00:05:20.097 Suite: pci 00:05:20.097 Test: pci_hook ...[2024-07-26 21:51:31.148345] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1996655 has claimed it 00:05:20.097 EAL: Cannot find device (10000:00:01.0) 00:05:20.097 EAL: Failed to attach device on primary process 00:05:20.097 passed 00:05:20.097 00:05:20.097 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.097 suites 1 1 n/a 0 0 00:05:20.097 tests 1 1 1 0 0 00:05:20.097 asserts 
25 25 25 0 n/a 00:05:20.097 00:05:20.097 Elapsed time = 0.043 seconds 00:05:20.097 00:05:20.097 real 0m0.065s 00:05:20.097 user 0m0.013s 00:05:20.097 sys 0m0.052s 00:05:20.097 21:51:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.097 21:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:20.097 ************************************ 00:05:20.097 END TEST env_pci 00:05:20.097 ************************************ 00:05:20.097 21:51:31 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:20.097 21:51:31 -- env/env.sh@15 -- # uname 00:05:20.097 21:51:31 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:20.097 21:51:31 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:20.097 21:51:31 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.097 21:51:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:20.097 21:51:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.097 21:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:20.097 ************************************ 00:05:20.097 START TEST env_dpdk_post_init 00:05:20.097 ************************************ 00:05:20.097 21:51:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:20.097 EAL: Detected CPU lcores: 112 00:05:20.097 EAL: Detected NUMA nodes: 2 00:05:20.097 EAL: Detected shared linkage of DPDK 00:05:20.097 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.097 EAL: Selected IOVA mode 'VA' 00:05:20.097 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.097 EAL: VFIO support initialized 00:05:20.097 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.357 EAL: Using IOMMU type 1 (Type 1) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:20.357 EAL: Ignore mapping IO port bar(1) 00:05:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:20.616 EAL: Ignore mapping IO port bar(1) 00:05:20.616 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:21.184 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:25.379 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:25.379 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:25.379 Starting DPDK initialization... 00:05:25.379 Starting SPDK post initialization... 00:05:25.379 SPDK NVMe probe 00:05:25.379 Attaching to 0000:d8:00.0 00:05:25.379 Attached to 0000:d8:00.0 00:05:25.379 Cleaning up... 00:05:25.379 00:05:25.379 real 0m5.345s 00:05:25.379 user 0m4.008s 00:05:25.379 sys 0m0.402s 00:05:25.379 21:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.379 21:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.379 ************************************ 00:05:25.379 END TEST env_dpdk_post_init 00:05:25.379 ************************************ 00:05:25.638 21:51:36 -- env/env.sh@26 -- # uname 00:05:25.638 21:51:36 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:25.638 21:51:36 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.638 21:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.638 21:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.638 21:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.638 ************************************ 00:05:25.638 START TEST env_mem_callbacks 00:05:25.638 ************************************ 00:05:25.638 21:51:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.638 EAL: Detected CPU lcores: 112 00:05:25.638 EAL: Detected NUMA nodes: 2 00:05:25.638 EAL: Detected shared linkage of DPDK 00:05:25.638 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.638 EAL: Selected IOVA mode 'VA' 00:05:25.638 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.638 EAL: VFIO support initialized 00:05:25.638 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.638 00:05:25.638 00:05:25.638 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.638 http://cunit.sourceforge.net/ 00:05:25.638 00:05:25.638 00:05:25.638 Suite: memory 00:05:25.638 Test: test ... 
00:05:25.638 register 0x200000200000 2097152 00:05:25.638 malloc 3145728 00:05:25.638 register 0x200000400000 4194304 00:05:25.638 buf 0x200000500000 len 3145728 PASSED 00:05:25.638 malloc 64 00:05:25.638 buf 0x2000004fff40 len 64 PASSED 00:05:25.638 malloc 4194304 00:05:25.638 register 0x200000800000 6291456 00:05:25.638 buf 0x200000a00000 len 4194304 PASSED 00:05:25.638 free 0x200000500000 3145728 00:05:25.638 free 0x2000004fff40 64 00:05:25.638 unregister 0x200000400000 4194304 PASSED 00:05:25.638 free 0x200000a00000 4194304 00:05:25.638 unregister 0x200000800000 6291456 PASSED 00:05:25.638 malloc 8388608 00:05:25.638 register 0x200000400000 10485760 00:05:25.638 buf 0x200000600000 len 8388608 PASSED 00:05:25.638 free 0x200000600000 8388608 00:05:25.638 unregister 0x200000400000 10485760 PASSED 00:05:25.638 passed 00:05:25.638 00:05:25.638 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.638 suites 1 1 n/a 0 0 00:05:25.638 tests 1 1 1 0 0 00:05:25.638 asserts 15 15 15 0 n/a 00:05:25.638 00:05:25.638 Elapsed time = 0.005 seconds 00:05:25.638 00:05:25.638 real 0m0.072s 00:05:25.638 user 0m0.021s 00:05:25.638 sys 0m0.051s 00:05:25.638 21:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.638 21:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.638 ************************************ 00:05:25.638 END TEST env_mem_callbacks 00:05:25.638 ************************************ 00:05:25.638 00:05:25.638 real 0m7.106s 00:05:25.638 user 0m4.957s 00:05:25.638 sys 0m1.229s 00:05:25.638 21:51:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.638 21:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.638 ************************************ 00:05:25.638 END TEST env 00:05:25.638 ************************************ 00:05:25.638 21:51:36 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.638 21:51:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.638 21:51:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.638 21:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.638 ************************************ 00:05:25.638 START TEST rpc 00:05:25.638 ************************************ 00:05:25.638 21:51:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.898 * Looking for test storage... 00:05:25.898 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:25.898 21:51:36 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:25.898 21:51:36 -- rpc/rpc.sh@65 -- # spdk_pid=1997720 00:05:25.898 21:51:36 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.898 21:51:36 -- rpc/rpc.sh@67 -- # waitforlisten 1997720 00:05:25.898 21:51:36 -- common/autotest_common.sh@819 -- # '[' -z 1997720 ']' 00:05:25.898 21:51:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.898 21:51:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.898 21:51:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
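[editor's note] The RPC suite that starts below drives the freshly launched spdk_tgt over the /var/tmp/spdk.sock JSON-RPC socket. A condensed sketch of the bdev calls it exercises, issued here through the scripts/rpc.py client for illustration (the tests themselves go through the rpc_cmd shell helper; the Malloc0/Passthru0 names are the ones this log prints):
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $RPC bdev_malloc_create 8 512                      # 8 MB malloc bdev with 512 B blocks (Malloc0)
  $RPC bdev_passthru_create -b Malloc0 -p Passthru0  # stack a passthru bdev on top of it
  $RPC bdev_get_bdevs | jq length                    # both bdevs should be listed
  $RPC bdev_passthru_delete Passthru0
  $RPC bdev_malloc_delete Malloc0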
00:05:25.898 21:51:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.898 21:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:25.898 [2024-07-26 21:51:36.949612] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:25.898 [2024-07-26 21:51:36.949672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997720 ] 00:05:25.898 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.898 [2024-07-26 21:51:37.036174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.898 [2024-07-26 21:51:37.074048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.898 [2024-07-26 21:51:37.074153] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:25.898 [2024-07-26 21:51:37.074164] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1997720' to capture a snapshot of events at runtime. 00:05:25.898 [2024-07-26 21:51:37.074173] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1997720 for offline analysis/debug. 00:05:25.898 [2024-07-26 21:51:37.074195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.865 21:51:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.865 21:51:37 -- common/autotest_common.sh@852 -- # return 0 00:05:26.865 21:51:37 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:26.865 21:51:37 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:26.865 21:51:37 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.865 21:51:37 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.865 21:51:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.865 21:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.865 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 ************************************ 00:05:26.865 START TEST rpc_integrity 00:05:26.865 ************************************ 00:05:26.865 21:51:37 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:26.865 21:51:37 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.865 21:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.865 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 21:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.865 21:51:37 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:26.865 21:51:37 -- rpc/rpc.sh@13 -- # jq length 00:05:26.865 21:51:37 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.865 21:51:37 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.865 21:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.865 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 21:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.865 21:51:37 -- 
rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.865 21:51:37 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.865 21:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.865 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 21:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.865 21:51:37 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.865 { 00:05:26.865 "name": "Malloc0", 00:05:26.865 "aliases": [ 00:05:26.865 "b289ca43-d569-4610-89d9-529343b81bf9" 00:05:26.865 ], 00:05:26.865 "product_name": "Malloc disk", 00:05:26.865 "block_size": 512, 00:05:26.865 "num_blocks": 16384, 00:05:26.865 "uuid": "b289ca43-d569-4610-89d9-529343b81bf9", 00:05:26.865 "assigned_rate_limits": { 00:05:26.865 "rw_ios_per_sec": 0, 00:05:26.865 "rw_mbytes_per_sec": 0, 00:05:26.865 "r_mbytes_per_sec": 0, 00:05:26.865 "w_mbytes_per_sec": 0 00:05:26.865 }, 00:05:26.865 "claimed": false, 00:05:26.865 "zoned": false, 00:05:26.865 "supported_io_types": { 00:05:26.865 "read": true, 00:05:26.865 "write": true, 00:05:26.865 "unmap": true, 00:05:26.865 "write_zeroes": true, 00:05:26.865 "flush": true, 00:05:26.865 "reset": true, 00:05:26.865 "compare": false, 00:05:26.865 "compare_and_write": false, 00:05:26.865 "abort": true, 00:05:26.865 "nvme_admin": false, 00:05:26.865 "nvme_io": false 00:05:26.865 }, 00:05:26.865 "memory_domains": [ 00:05:26.865 { 00:05:26.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.865 "dma_device_type": 2 00:05:26.865 } 00:05:26.865 ], 00:05:26.865 "driver_specific": {} 00:05:26.865 } 00:05:26.865 ]' 00:05:26.865 21:51:37 -- rpc/rpc.sh@17 -- # jq length 00:05:26.865 21:51:37 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.865 21:51:37 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.865 21:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.865 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 [2024-07-26 21:51:37.869043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.865 [2024-07-26 21:51:37.869075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.865 [2024-07-26 21:51:37.869088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2138e30 00:05:26.865 [2024-07-26 21:51:37.869100] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.865 [2024-07-26 21:51:37.870098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.865 [2024-07-26 21:51:37.870119] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.865 Passthru0 00:05:26.865 21:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.865 21:51:37 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.865 21:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.865 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.865 21:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.865 21:51:37 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.865 { 00:05:26.865 "name": "Malloc0", 00:05:26.865 "aliases": [ 00:05:26.866 "b289ca43-d569-4610-89d9-529343b81bf9" 00:05:26.866 ], 00:05:26.866 "product_name": "Malloc disk", 00:05:26.866 "block_size": 512, 00:05:26.866 "num_blocks": 16384, 00:05:26.866 "uuid": "b289ca43-d569-4610-89d9-529343b81bf9", 00:05:26.866 "assigned_rate_limits": { 00:05:26.866 "rw_ios_per_sec": 0, 00:05:26.866 "rw_mbytes_per_sec": 0, 00:05:26.866 "r_mbytes_per_sec": 0, 00:05:26.866 
"w_mbytes_per_sec": 0 00:05:26.866 }, 00:05:26.866 "claimed": true, 00:05:26.866 "claim_type": "exclusive_write", 00:05:26.866 "zoned": false, 00:05:26.866 "supported_io_types": { 00:05:26.866 "read": true, 00:05:26.866 "write": true, 00:05:26.866 "unmap": true, 00:05:26.866 "write_zeroes": true, 00:05:26.866 "flush": true, 00:05:26.866 "reset": true, 00:05:26.866 "compare": false, 00:05:26.866 "compare_and_write": false, 00:05:26.866 "abort": true, 00:05:26.866 "nvme_admin": false, 00:05:26.866 "nvme_io": false 00:05:26.866 }, 00:05:26.866 "memory_domains": [ 00:05:26.866 { 00:05:26.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.866 "dma_device_type": 2 00:05:26.866 } 00:05:26.866 ], 00:05:26.866 "driver_specific": {} 00:05:26.866 }, 00:05:26.866 { 00:05:26.866 "name": "Passthru0", 00:05:26.866 "aliases": [ 00:05:26.866 "1af817a8-2411-5f55-b2fa-4834bdfa5036" 00:05:26.866 ], 00:05:26.866 "product_name": "passthru", 00:05:26.866 "block_size": 512, 00:05:26.866 "num_blocks": 16384, 00:05:26.866 "uuid": "1af817a8-2411-5f55-b2fa-4834bdfa5036", 00:05:26.866 "assigned_rate_limits": { 00:05:26.866 "rw_ios_per_sec": 0, 00:05:26.866 "rw_mbytes_per_sec": 0, 00:05:26.866 "r_mbytes_per_sec": 0, 00:05:26.866 "w_mbytes_per_sec": 0 00:05:26.866 }, 00:05:26.866 "claimed": false, 00:05:26.866 "zoned": false, 00:05:26.866 "supported_io_types": { 00:05:26.866 "read": true, 00:05:26.866 "write": true, 00:05:26.866 "unmap": true, 00:05:26.866 "write_zeroes": true, 00:05:26.866 "flush": true, 00:05:26.866 "reset": true, 00:05:26.866 "compare": false, 00:05:26.866 "compare_and_write": false, 00:05:26.866 "abort": true, 00:05:26.866 "nvme_admin": false, 00:05:26.866 "nvme_io": false 00:05:26.866 }, 00:05:26.866 "memory_domains": [ 00:05:26.866 { 00:05:26.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.866 "dma_device_type": 2 00:05:26.866 } 00:05:26.866 ], 00:05:26.866 "driver_specific": { 00:05:26.866 "passthru": { 00:05:26.866 "name": "Passthru0", 00:05:26.866 "base_bdev_name": "Malloc0" 00:05:26.866 } 00:05:26.866 } 00:05:26.866 } 00:05:26.866 ]' 00:05:26.866 21:51:37 -- rpc/rpc.sh@21 -- # jq length 00:05:26.866 21:51:37 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.866 21:51:37 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.866 21:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.866 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 21:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.866 21:51:37 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:26.866 21:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.866 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 21:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.866 21:51:37 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:26.866 21:51:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.866 21:51:37 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 21:51:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.866 21:51:37 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:26.866 21:51:37 -- rpc/rpc.sh@26 -- # jq length 00:05:26.866 21:51:37 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:26.866 00:05:26.866 real 0m0.250s 00:05:26.866 user 0m0.138s 00:05:26.866 sys 0m0.047s 00:05:26.866 21:51:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.866 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 ************************************ 00:05:26.866 END TEST rpc_integrity 
00:05:26.866 ************************************ 00:05:26.866 21:51:38 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:26.866 21:51:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:26.866 21:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.866 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 ************************************ 00:05:26.866 START TEST rpc_plugins 00:05:26.866 ************************************ 00:05:26.866 21:51:38 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:26.866 21:51:38 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:26.866 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.866 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.866 21:51:38 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:26.866 21:51:38 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:26.866 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:26.866 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:26.866 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:26.866 21:51:38 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:26.866 { 00:05:26.866 "name": "Malloc1", 00:05:26.866 "aliases": [ 00:05:26.866 "4eef2dfe-2462-4055-8f85-6d73f4b4073d" 00:05:26.866 ], 00:05:26.866 "product_name": "Malloc disk", 00:05:26.866 "block_size": 4096, 00:05:26.866 "num_blocks": 256, 00:05:26.866 "uuid": "4eef2dfe-2462-4055-8f85-6d73f4b4073d", 00:05:26.866 "assigned_rate_limits": { 00:05:26.866 "rw_ios_per_sec": 0, 00:05:26.866 "rw_mbytes_per_sec": 0, 00:05:26.866 "r_mbytes_per_sec": 0, 00:05:26.866 "w_mbytes_per_sec": 0 00:05:26.866 }, 00:05:26.866 "claimed": false, 00:05:26.866 "zoned": false, 00:05:26.866 "supported_io_types": { 00:05:26.866 "read": true, 00:05:26.866 "write": true, 00:05:26.866 "unmap": true, 00:05:26.866 "write_zeroes": true, 00:05:26.866 "flush": true, 00:05:26.866 "reset": true, 00:05:26.866 "compare": false, 00:05:26.866 "compare_and_write": false, 00:05:26.866 "abort": true, 00:05:26.866 "nvme_admin": false, 00:05:26.866 "nvme_io": false 00:05:26.866 }, 00:05:26.866 "memory_domains": [ 00:05:26.866 { 00:05:26.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.866 "dma_device_type": 2 00:05:26.866 } 00:05:26.866 ], 00:05:26.866 "driver_specific": {} 00:05:26.866 } 00:05:26.866 ]' 00:05:26.866 21:51:38 -- rpc/rpc.sh@32 -- # jq length 00:05:27.126 21:51:38 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:27.126 21:51:38 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:27.126 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.126 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.126 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.126 21:51:38 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:27.126 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.126 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.126 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.126 21:51:38 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:27.126 21:51:38 -- rpc/rpc.sh@36 -- # jq length 00:05:27.126 21:51:38 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:27.126 00:05:27.126 real 0m0.118s 00:05:27.126 user 0m0.066s 00:05:27.126 sys 0m0.021s 00:05:27.126 21:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.126 21:51:38 -- common/autotest_common.sh@10 -- # set +x 
00:05:27.126 ************************************ 00:05:27.126 END TEST rpc_plugins 00:05:27.126 ************************************ 00:05:27.126 21:51:38 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:27.126 21:51:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.126 21:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.126 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.126 ************************************ 00:05:27.126 START TEST rpc_trace_cmd_test 00:05:27.126 ************************************ 00:05:27.126 21:51:38 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:27.126 21:51:38 -- rpc/rpc.sh@40 -- # local info 00:05:27.126 21:51:38 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:27.126 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.126 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.126 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.126 21:51:38 -- rpc/rpc.sh@42 -- # info='{ 00:05:27.126 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1997720", 00:05:27.126 "tpoint_group_mask": "0x8", 00:05:27.126 "iscsi_conn": { 00:05:27.126 "mask": "0x2", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "scsi": { 00:05:27.126 "mask": "0x4", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "bdev": { 00:05:27.126 "mask": "0x8", 00:05:27.126 "tpoint_mask": "0xffffffffffffffff" 00:05:27.126 }, 00:05:27.126 "nvmf_rdma": { 00:05:27.126 "mask": "0x10", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "nvmf_tcp": { 00:05:27.126 "mask": "0x20", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "ftl": { 00:05:27.126 "mask": "0x40", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "blobfs": { 00:05:27.126 "mask": "0x80", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "dsa": { 00:05:27.126 "mask": "0x200", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "thread": { 00:05:27.126 "mask": "0x400", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "nvme_pcie": { 00:05:27.126 "mask": "0x800", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "iaa": { 00:05:27.126 "mask": "0x1000", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "nvme_tcp": { 00:05:27.126 "mask": "0x2000", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 }, 00:05:27.126 "bdev_nvme": { 00:05:27.126 "mask": "0x4000", 00:05:27.126 "tpoint_mask": "0x0" 00:05:27.126 } 00:05:27.126 }' 00:05:27.126 21:51:38 -- rpc/rpc.sh@43 -- # jq length 00:05:27.126 21:51:38 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:27.126 21:51:38 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:27.126 21:51:38 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:27.126 21:51:38 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:27.386 21:51:38 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:27.386 21:51:38 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:27.386 21:51:38 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:27.386 21:51:38 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:27.386 21:51:38 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:27.386 00:05:27.386 real 0m0.219s 00:05:27.386 user 0m0.176s 00:05:27.386 sys 0m0.034s 00:05:27.386 21:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.386 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.386 ************************************ 00:05:27.386 END TEST rpc_trace_cmd_test 
00:05:27.386 ************************************ 00:05:27.386 21:51:38 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:27.386 21:51:38 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:27.386 21:51:38 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:27.386 21:51:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:27.386 21:51:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:27.386 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.386 ************************************ 00:05:27.386 START TEST rpc_daemon_integrity 00:05:27.386 ************************************ 00:05:27.386 21:51:38 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:27.386 21:51:38 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.386 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.386 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.386 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.386 21:51:38 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.386 21:51:38 -- rpc/rpc.sh@13 -- # jq length 00:05:27.386 21:51:38 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.386 21:51:38 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.386 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.386 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.386 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.386 21:51:38 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.386 21:51:38 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.386 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.386 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.386 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.386 21:51:38 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.386 { 00:05:27.386 "name": "Malloc2", 00:05:27.386 "aliases": [ 00:05:27.386 "14d069b4-50c9-4aa9-ad21-e4f0d4efa73f" 00:05:27.386 ], 00:05:27.386 "product_name": "Malloc disk", 00:05:27.386 "block_size": 512, 00:05:27.386 "num_blocks": 16384, 00:05:27.386 "uuid": "14d069b4-50c9-4aa9-ad21-e4f0d4efa73f", 00:05:27.386 "assigned_rate_limits": { 00:05:27.386 "rw_ios_per_sec": 0, 00:05:27.386 "rw_mbytes_per_sec": 0, 00:05:27.386 "r_mbytes_per_sec": 0, 00:05:27.386 "w_mbytes_per_sec": 0 00:05:27.386 }, 00:05:27.386 "claimed": false, 00:05:27.386 "zoned": false, 00:05:27.386 "supported_io_types": { 00:05:27.386 "read": true, 00:05:27.386 "write": true, 00:05:27.386 "unmap": true, 00:05:27.386 "write_zeroes": true, 00:05:27.386 "flush": true, 00:05:27.386 "reset": true, 00:05:27.386 "compare": false, 00:05:27.386 "compare_and_write": false, 00:05:27.386 "abort": true, 00:05:27.386 "nvme_admin": false, 00:05:27.386 "nvme_io": false 00:05:27.386 }, 00:05:27.386 "memory_domains": [ 00:05:27.386 { 00:05:27.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.386 "dma_device_type": 2 00:05:27.386 } 00:05:27.386 ], 00:05:27.386 "driver_specific": {} 00:05:27.386 } 00:05:27.386 ]' 00:05:27.386 21:51:38 -- rpc/rpc.sh@17 -- # jq length 00:05:27.386 21:51:38 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.386 21:51:38 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.386 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.386 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.386 [2024-07-26 21:51:38.582953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.386 [2024-07-26 21:51:38.582983] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.386 [2024-07-26 21:51:38.582999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x213a6b0 00:05:27.386 [2024-07-26 21:51:38.583008] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.386 [2024-07-26 21:51:38.583899] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.386 [2024-07-26 21:51:38.583921] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.386 Passthru0 00:05:27.386 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.386 21:51:38 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.386 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.386 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.646 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.646 21:51:38 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.646 { 00:05:27.646 "name": "Malloc2", 00:05:27.646 "aliases": [ 00:05:27.646 "14d069b4-50c9-4aa9-ad21-e4f0d4efa73f" 00:05:27.646 ], 00:05:27.646 "product_name": "Malloc disk", 00:05:27.646 "block_size": 512, 00:05:27.646 "num_blocks": 16384, 00:05:27.646 "uuid": "14d069b4-50c9-4aa9-ad21-e4f0d4efa73f", 00:05:27.646 "assigned_rate_limits": { 00:05:27.646 "rw_ios_per_sec": 0, 00:05:27.646 "rw_mbytes_per_sec": 0, 00:05:27.646 "r_mbytes_per_sec": 0, 00:05:27.646 "w_mbytes_per_sec": 0 00:05:27.646 }, 00:05:27.646 "claimed": true, 00:05:27.646 "claim_type": "exclusive_write", 00:05:27.646 "zoned": false, 00:05:27.646 "supported_io_types": { 00:05:27.646 "read": true, 00:05:27.646 "write": true, 00:05:27.646 "unmap": true, 00:05:27.646 "write_zeroes": true, 00:05:27.646 "flush": true, 00:05:27.646 "reset": true, 00:05:27.646 "compare": false, 00:05:27.646 "compare_and_write": false, 00:05:27.646 "abort": true, 00:05:27.646 "nvme_admin": false, 00:05:27.646 "nvme_io": false 00:05:27.646 }, 00:05:27.646 "memory_domains": [ 00:05:27.646 { 00:05:27.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.646 "dma_device_type": 2 00:05:27.646 } 00:05:27.646 ], 00:05:27.646 "driver_specific": {} 00:05:27.646 }, 00:05:27.646 { 00:05:27.646 "name": "Passthru0", 00:05:27.646 "aliases": [ 00:05:27.646 "412b800d-08cc-561c-8e6c-cfabf40c98b8" 00:05:27.646 ], 00:05:27.646 "product_name": "passthru", 00:05:27.646 "block_size": 512, 00:05:27.646 "num_blocks": 16384, 00:05:27.646 "uuid": "412b800d-08cc-561c-8e6c-cfabf40c98b8", 00:05:27.646 "assigned_rate_limits": { 00:05:27.646 "rw_ios_per_sec": 0, 00:05:27.646 "rw_mbytes_per_sec": 0, 00:05:27.646 "r_mbytes_per_sec": 0, 00:05:27.646 "w_mbytes_per_sec": 0 00:05:27.646 }, 00:05:27.646 "claimed": false, 00:05:27.646 "zoned": false, 00:05:27.646 "supported_io_types": { 00:05:27.646 "read": true, 00:05:27.646 "write": true, 00:05:27.646 "unmap": true, 00:05:27.646 "write_zeroes": true, 00:05:27.646 "flush": true, 00:05:27.646 "reset": true, 00:05:27.646 "compare": false, 00:05:27.646 "compare_and_write": false, 00:05:27.646 "abort": true, 00:05:27.646 "nvme_admin": false, 00:05:27.646 "nvme_io": false 00:05:27.646 }, 00:05:27.646 "memory_domains": [ 00:05:27.646 { 00:05:27.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.646 "dma_device_type": 2 00:05:27.646 } 00:05:27.646 ], 00:05:27.646 "driver_specific": { 00:05:27.646 "passthru": { 00:05:27.646 "name": "Passthru0", 00:05:27.646 "base_bdev_name": "Malloc2" 00:05:27.646 } 00:05:27.646 } 00:05:27.646 } 00:05:27.646 ]' 00:05:27.646 21:51:38 -- 
rpc/rpc.sh@21 -- # jq length 00:05:27.646 21:51:38 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.646 21:51:38 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.646 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.646 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.646 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.646 21:51:38 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.646 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.646 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.646 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.646 21:51:38 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.646 21:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:27.646 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.646 21:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:27.646 21:51:38 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.646 21:51:38 -- rpc/rpc.sh@26 -- # jq length 00:05:27.646 21:51:38 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.646 00:05:27.646 real 0m0.267s 00:05:27.646 user 0m0.155s 00:05:27.646 sys 0m0.050s 00:05:27.646 21:51:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.646 21:51:38 -- common/autotest_common.sh@10 -- # set +x 00:05:27.646 ************************************ 00:05:27.646 END TEST rpc_daemon_integrity 00:05:27.646 ************************************ 00:05:27.646 21:51:38 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.646 21:51:38 -- rpc/rpc.sh@84 -- # killprocess 1997720 00:05:27.646 21:51:38 -- common/autotest_common.sh@926 -- # '[' -z 1997720 ']' 00:05:27.646 21:51:38 -- common/autotest_common.sh@930 -- # kill -0 1997720 00:05:27.646 21:51:38 -- common/autotest_common.sh@931 -- # uname 00:05:27.646 21:51:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.646 21:51:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1997720 00:05:27.646 21:51:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:27.646 21:51:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:27.646 21:51:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1997720' 00:05:27.646 killing process with pid 1997720 00:05:27.646 21:51:38 -- common/autotest_common.sh@945 -- # kill 1997720 00:05:27.646 21:51:38 -- common/autotest_common.sh@950 -- # wait 1997720 00:05:27.906 00:05:27.906 real 0m2.311s 00:05:27.906 user 0m2.863s 00:05:27.906 sys 0m0.705s 00:05:27.906 21:51:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.906 21:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:27.906 ************************************ 00:05:27.906 END TEST rpc 00:05:27.906 ************************************ 00:05:28.165 21:51:39 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.165 21:51:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.165 21:51:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.165 21:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.165 ************************************ 00:05:28.165 START TEST rpc_client 00:05:28.165 ************************************ 00:05:28.165 21:51:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.165 * Looking for test storage... 
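The rpc_daemon_integrity pass above drives the bdev RPCs through the rpc_cmd daemon: create a malloc bdev, layer a passthru bdev on it, count both with jq, then tear them down again. A minimal manual sketch of the same sequence against a running spdk_tgt (default RPC socket and an spdk checkout assumed):

  ./scripts/rpc.py bdev_malloc_create 8 512                        # 8 MiB malloc bdev, 512-byte blocks (Malloc2 above)
  ./scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0    # passthru bdev claims the base bdev
  ./scripts/rpc.py bdev_get_bdevs | jq length                      # expect 2
  ./scripts/rpc.py bdev_passthru_delete Passthru0                  # tear down in reverse order
  ./scripts/rpc.py bdev_malloc_delete Malloc2
  ./scripts/rpc.py bdev_get_bdevs | jq length                      # expect 0 again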
00:05:28.165 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:28.165 21:51:39 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:28.165 OK 00:05:28.165 21:51:39 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.165 00:05:28.165 real 0m0.114s 00:05:28.165 user 0m0.044s 00:05:28.165 sys 0m0.080s 00:05:28.165 21:51:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.165 21:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.165 ************************************ 00:05:28.165 END TEST rpc_client 00:05:28.165 ************************************ 00:05:28.165 21:51:39 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.165 21:51:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.165 21:51:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.165 21:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.165 ************************************ 00:05:28.165 START TEST json_config 00:05:28.165 ************************************ 00:05:28.165 21:51:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.425 21:51:39 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.425 21:51:39 -- nvmf/common.sh@7 -- # uname -s 00:05:28.425 21:51:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.425 21:51:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.425 21:51:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.425 21:51:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.425 21:51:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.425 21:51:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.425 21:51:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.425 21:51:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.425 21:51:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.425 21:51:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.425 21:51:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:28.425 21:51:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:28.425 21:51:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.425 21:51:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.425 21:51:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.425 21:51:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:28.425 21:51:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.425 21:51:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.425 21:51:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.425 21:51:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.425 
21:51:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.425 21:51:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.425 21:51:39 -- paths/export.sh@5 -- # export PATH 00:05:28.425 21:51:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.425 21:51:39 -- nvmf/common.sh@46 -- # : 0 00:05:28.425 21:51:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:28.425 21:51:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:28.425 21:51:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:28.425 21:51:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.425 21:51:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.425 21:51:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:28.425 21:51:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:28.425 21:51:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:28.425 21:51:39 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:28.425 21:51:39 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:28.425 21:51:39 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:28.425 21:51:39 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.425 21:51:39 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.425 21:51:39 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:28.425 21:51:39 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.425 21:51:39 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:28.425 21:51:39 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.425 21:51:39 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:28.425 21:51:39 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:28.425 21:51:39 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:28.425 21:51:39 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:28.425 21:51:39 -- json_config/json_config.sh@418 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:05:28.425 21:51:39 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:28.425 INFO: JSON configuration test init 00:05:28.425 21:51:39 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:28.425 21:51:39 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:28.425 21:51:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:28.425 21:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.425 21:51:39 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:28.425 21:51:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:28.425 21:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.425 21:51:39 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.425 21:51:39 -- json_config/json_config.sh@98 -- # local app=target 00:05:28.425 21:51:39 -- json_config/json_config.sh@99 -- # shift 00:05:28.425 21:51:39 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:28.425 21:51:39 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:28.425 21:51:39 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:28.425 21:51:39 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:28.425 21:51:39 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:28.426 21:51:39 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.426 21:51:39 -- json_config/json_config.sh@111 -- # app_pid[$app]=1998343 00:05:28.426 21:51:39 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:28.426 Waiting for target to run... 00:05:28.426 21:51:39 -- json_config/json_config.sh@114 -- # waitforlisten 1998343 /var/tmp/spdk_tgt.sock 00:05:28.426 21:51:39 -- common/autotest_common.sh@819 -- # '[' -z 1998343 ']' 00:05:28.426 21:51:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.426 21:51:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:28.426 21:51:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.426 21:51:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:28.426 21:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:28.426 [2024-07-26 21:51:39.472496] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:28.426 [2024-07-26 21:51:39.472553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998343 ] 00:05:28.426 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.685 [2024-07-26 21:51:39.764747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.685 [2024-07-26 21:51:39.783573] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:28.685 [2024-07-26 21:51:39.783685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.253 21:51:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:29.253 21:51:40 -- common/autotest_common.sh@852 -- # return 0 00:05:29.253 21:51:40 -- json_config/json_config.sh@115 -- # echo '' 00:05:29.253 00:05:29.253 21:51:40 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:29.253 21:51:40 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:29.253 21:51:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.253 21:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:29.253 21:51:40 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:29.253 21:51:40 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:29.253 21:51:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.253 21:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:29.253 21:51:40 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.253 21:51:40 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:29.253 21:51:40 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.540 21:51:43 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:32.540 21:51:43 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:32.540 21:51:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.540 21:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:32.540 21:51:43 -- json_config/json_config.sh@48 -- # local ret=0 00:05:32.540 21:51:43 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.540 21:51:43 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:32.540 21:51:43 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:32.540 21:51:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:32.540 21:51:43 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:32.540 21:51:43 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:32.540 21:51:43 -- json_config/json_config.sh@51 -- # local get_types 00:05:32.540 21:51:43 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:32.540 21:51:43 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:32.540 21:51:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:32.540 21:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:32.540 21:51:43 -- json_config/json_config.sh@58 -- # return 0 00:05:32.540 21:51:43 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:32.540 21:51:43 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:32.540 21:51:43 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:32.540 21:51:43 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:32.540 21:51:43 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:32.540 21:51:43 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:32.540 21:51:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.540 21:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:32.540 21:51:43 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:32.540 21:51:43 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:05:32.540 21:51:43 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:05:32.540 21:51:43 -- json_config/json_config.sh@287 -- # nvmftestinit 00:05:32.540 21:51:43 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:05:32.540 21:51:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:32.540 21:51:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:05:32.540 21:51:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:05:32.540 21:51:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:05:32.540 21:51:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:32.540 21:51:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:32.540 21:51:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:32.540 21:51:43 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:05:32.540 21:51:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:05:32.540 21:51:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:05:32.540 21:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:40.657 21:51:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:40.657 21:51:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:05:40.657 21:51:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:05:40.657 21:51:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:05:40.657 21:51:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:05:40.657 21:51:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:05:40.657 21:51:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:05:40.657 21:51:51 -- nvmf/common.sh@294 -- # net_devs=() 00:05:40.657 21:51:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:05:40.657 21:51:51 -- nvmf/common.sh@295 -- # e810=() 00:05:40.657 21:51:51 -- nvmf/common.sh@295 -- # local -ga e810 00:05:40.657 21:51:51 -- nvmf/common.sh@296 -- # x722=() 00:05:40.657 21:51:51 -- nvmf/common.sh@296 -- # local -ga x722 00:05:40.657 21:51:51 -- nvmf/common.sh@297 -- # mlx=() 00:05:40.657 21:51:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:05:40.657 21:51:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:05:40.657 21:51:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:40.657 21:51:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:05:40.657 21:51:51 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:05:40.657 21:51:51 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:05:40.657 21:51:51 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:05:40.657 21:51:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:05:40.657 21:51:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:40.657 21:51:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:40.657 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:40.657 21:51:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:40.657 21:51:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:05:40.657 21:51:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:40.657 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:40.657 21:51:51 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:05:40.657 21:51:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:05:40.657 21:51:51 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:05:40.657 21:51:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:40.657 21:51:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.657 21:51:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:40.657 21:51:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.657 21:51:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:40.657 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:40.657 21:51:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.657 21:51:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:05:40.657 21:51:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:40.657 21:51:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:05:40.657 21:51:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:40.657 21:51:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:40.657 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:40.658 21:51:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:05:40.658 21:51:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:05:40.658 21:51:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:05:40.658 21:51:51 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:05:40.658 21:51:51 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:05:40.658 21:51:51 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:05:40.658 21:51:51 -- nvmf/common.sh@408 -- # rdma_device_init 00:05:40.658 21:51:51 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:05:40.658 21:51:51 -- nvmf/common.sh@57 -- # uname 00:05:40.658 21:51:51 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:05:40.658 21:51:51 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:05:40.658 21:51:51 -- nvmf/common.sh@62 -- # modprobe ib_core 00:05:40.658 21:51:51 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:05:40.658 21:51:51 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:05:40.658 21:51:51 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:05:40.658 21:51:51 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:05:40.658 21:51:51 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:05:40.658 21:51:51 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:05:40.658 21:51:51 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:40.658 21:51:51 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:05:40.658 21:51:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:40.658 21:51:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:40.658 21:51:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:40.658 21:51:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:40.658 21:51:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:40.658 21:51:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:40.658 21:51:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.658 21:51:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:40.658 21:51:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:40.658 21:51:51 -- nvmf/common.sh@104 -- # continue 2 00:05:40.658 21:51:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:40.658 21:51:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.658 21:51:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:40.658 21:51:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.658 21:51:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:40.658 21:51:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:40.658 21:51:51 -- nvmf/common.sh@104 -- # continue 2 00:05:40.658 21:51:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:40.658 21:51:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:05:40.658 21:51:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:40.658 21:51:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:40.658 21:51:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:40.658 21:51:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:40.916 21:51:51 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:05:40.916 21:51:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:05:40.916 21:51:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:05:40.916 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:40.916 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:40.916 altname enp217s0f0np0 00:05:40.916 altname ens818f0np0 00:05:40.916 inet 192.168.100.8/24 scope global mlx_0_0 00:05:40.917 valid_lft forever preferred_lft forever 00:05:40.917 21:51:51 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:05:40.917 21:51:51 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:05:40.917 
21:51:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:40.917 21:51:51 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:05:40.917 21:51:51 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:05:40.917 21:51:51 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:05:40.917 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:40.917 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:40.917 altname enp217s0f1np1 00:05:40.917 altname ens818f1np1 00:05:40.917 inet 192.168.100.9/24 scope global mlx_0_1 00:05:40.917 valid_lft forever preferred_lft forever 00:05:40.917 21:51:51 -- nvmf/common.sh@410 -- # return 0 00:05:40.917 21:51:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:05:40.917 21:51:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:40.917 21:51:51 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:05:40.917 21:51:51 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:05:40.917 21:51:51 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:05:40.917 21:51:51 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:40.917 21:51:51 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:05:40.917 21:51:51 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:05:40.917 21:51:51 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:40.917 21:51:51 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:05:40.917 21:51:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:40.917 21:51:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.917 21:51:51 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:40.917 21:51:51 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:05:40.917 21:51:51 -- nvmf/common.sh@104 -- # continue 2 00:05:40.917 21:51:51 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:05:40.917 21:51:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.917 21:51:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:40.917 21:51:51 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:40.917 21:51:51 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:40.917 21:51:51 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:05:40.917 21:51:51 -- nvmf/common.sh@104 -- # continue 2 00:05:40.917 21:51:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:40.917 21:51:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:05:40.917 21:51:51 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:40.917 21:51:51 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:05:40.917 21:51:51 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:05:40.917 21:51:51 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:05:40.917 21:51:51 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:05:40.917 21:51:51 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:05:40.917 192.168.100.9' 00:05:40.917 21:51:51 -- nvmf/common.sh@445 -- # echo '192.168.100.8 
00:05:40.917 192.168.100.9' 00:05:40.917 21:51:51 -- nvmf/common.sh@445 -- # head -n 1 00:05:40.917 21:51:51 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:40.917 21:51:51 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:40.917 192.168.100.9' 00:05:40.917 21:51:51 -- nvmf/common.sh@446 -- # head -n 1 00:05:40.917 21:51:51 -- nvmf/common.sh@446 -- # tail -n +2 00:05:40.917 21:51:51 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:40.917 21:51:51 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:05:40.917 21:51:51 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:40.917 21:51:51 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:05:40.917 21:51:51 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:05:40.917 21:51:51 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:05:40.917 21:51:52 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:05:40.917 21:51:52 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:40.917 21:51:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.176 MallocForNvmf0 00:05:41.176 21:51:52 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.176 21:51:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.176 MallocForNvmf1 00:05:41.176 21:51:52 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:41.176 21:51:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:41.435 [2024-07-26 21:51:52.509632] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:41.435 [2024-07-26 21:51:52.539802] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe37480/0xe44a00) succeed. 00:05:41.435 [2024-07-26 21:51:52.552509] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe39670/0xec4a40) succeed. 
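The allocate_nic_ips trace above reduces to loading the RDMA kernel modules and reading the IPv4 address off each Mellanox netdev; done by hand it looks roughly like this:

  # RDMA stack loaded by load_ib_rdma_modules
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe "$m"; done
  # IPv4 address of each RDMA-capable interface (the test takes the first as NVMF_FIRST_TARGET_IP)
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8 in this run
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.9 in this run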
00:05:41.435 21:51:52 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.435 21:51:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:41.694 21:51:52 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.694 21:51:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:41.954 21:51:52 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.954 21:51:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:41.954 21:51:53 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:41.954 21:51:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:42.221 [2024-07-26 21:51:53.237430] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:42.221 21:51:53 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:42.221 21:51:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:42.221 21:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.221 21:51:53 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:42.221 21:51:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:42.221 21:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.221 21:51:53 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:42.221 21:51:53 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:42.221 21:51:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:42.478 MallocBdevForConfigChangeCheck 00:05:42.478 21:51:53 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:42.478 21:51:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:42.478 21:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:42.479 21:51:53 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:42.479 21:51:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.737 21:51:53 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:42.737 INFO: shutting down applications... 
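create_nvmf_subsystem_config, as traced above, wires two malloc bdevs into an NVMe/RDMA subsystem before the configuration is saved. The same RPC sequence, condensed (socket path and listener address as in this run):

  RPC="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t rdma -u 8192 -c 0          # RDMA transport, options as passed by the test
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420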
00:05:42.737 21:51:53 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:42.737 21:51:53 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:42.737 21:51:53 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:42.737 21:51:53 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:45.271 Calling clear_iscsi_subsystem 00:05:45.271 Calling clear_nvmf_subsystem 00:05:45.271 Calling clear_nbd_subsystem 00:05:45.271 Calling clear_ublk_subsystem 00:05:45.271 Calling clear_vhost_blk_subsystem 00:05:45.271 Calling clear_vhost_scsi_subsystem 00:05:45.271 Calling clear_scheduler_subsystem 00:05:45.271 Calling clear_bdev_subsystem 00:05:45.271 Calling clear_accel_subsystem 00:05:45.271 Calling clear_vmd_subsystem 00:05:45.271 Calling clear_sock_subsystem 00:05:45.271 Calling clear_iobuf_subsystem 00:05:45.271 21:51:56 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:45.271 21:51:56 -- json_config/json_config.sh@396 -- # count=100 00:05:45.271 21:51:56 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:45.271 21:51:56 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.271 21:51:56 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:45.271 21:51:56 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:45.530 21:51:56 -- json_config/json_config.sh@398 -- # break 00:05:45.530 21:51:56 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:45.530 21:51:56 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:45.530 21:51:56 -- json_config/json_config.sh@120 -- # local app=target 00:05:45.530 21:51:56 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:45.530 21:51:56 -- json_config/json_config.sh@124 -- # [[ -n 1998343 ]] 00:05:45.530 21:51:56 -- json_config/json_config.sh@127 -- # kill -SIGINT 1998343 00:05:45.530 21:51:56 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:45.530 21:51:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:45.530 21:51:56 -- json_config/json_config.sh@130 -- # kill -0 1998343 00:05:45.530 21:51:56 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:46.099 21:51:57 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:46.099 21:51:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:46.099 21:51:57 -- json_config/json_config.sh@130 -- # kill -0 1998343 00:05:46.099 21:51:57 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:46.099 21:51:57 -- json_config/json_config.sh@132 -- # break 00:05:46.099 21:51:57 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:46.099 21:51:57 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:46.099 SPDK target shutdown done 00:05:46.099 21:51:57 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:46.099 INFO: relaunching applications... 
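json_config_test_shutdown_app, seen above, stops the target with SIGINT and then polls for up to roughly 15 seconds instead of force-killing it, so the app gets a clean shutdown path. The loop, condensed (the target pid assumed in $app_pid):

  kill -SIGINT "$app_pid"
  for _ in $(seq 1 30); do
      kill -0 "$app_pid" 2>/dev/null || break   # process has exited, shutdown done
      sleep 0.5
  done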
00:05:46.099 21:51:57 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.099 21:51:57 -- json_config/json_config.sh@98 -- # local app=target 00:05:46.099 21:51:57 -- json_config/json_config.sh@99 -- # shift 00:05:46.099 21:51:57 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:46.099 21:51:57 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:46.099 21:51:57 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:46.099 21:51:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:46.099 21:51:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:46.099 21:51:57 -- json_config/json_config.sh@111 -- # app_pid[$app]=2004218 00:05:46.099 21:51:57 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:46.099 Waiting for target to run... 00:05:46.099 21:51:57 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.099 21:51:57 -- json_config/json_config.sh@114 -- # waitforlisten 2004218 /var/tmp/spdk_tgt.sock 00:05:46.099 21:51:57 -- common/autotest_common.sh@819 -- # '[' -z 2004218 ']' 00:05:46.099 21:51:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.099 21:51:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.099 21:51:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.099 21:51:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.099 21:51:57 -- common/autotest_common.sh@10 -- # set +x 00:05:46.099 [2024-07-26 21:51:57.209282] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:46.099 [2024-07-26 21:51:57.209343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004218 ] 00:05:46.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.358 [2024-07-26 21:51:57.511841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.359 [2024-07-26 21:51:57.530947] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.359 [2024-07-26 21:51:57.531043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.696 [2024-07-26 21:52:00.559915] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20fb7d0/0x1f65e60) succeed. 00:05:49.696 [2024-07-26 21:52:00.571027] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20fb9a0/0x1fe5f00) succeed. 
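The relaunch traced above feeds the configuration captured earlier by save_config straight back in with --json, so the nvmf transport, subsystem and bdevs are recreated without any further RPCs. In outline:

  # capture the live configuration of the running target
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # ...and later restart the target non-interactively from that file
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json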
00:05:49.696 [2024-07-26 21:52:00.620592] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:50.263 21:52:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.263 21:52:01 -- common/autotest_common.sh@852 -- # return 0 00:05:50.263 21:52:01 -- json_config/json_config.sh@115 -- # echo '' 00:05:50.263 00:05:50.263 21:52:01 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:50.263 21:52:01 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:50.263 INFO: Checking if target configuration is the same... 00:05:50.263 21:52:01 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.263 21:52:01 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:50.263 21:52:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.263 + '[' 2 -ne 2 ']' 00:05:50.263 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.263 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:50.263 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:50.263 +++ basename /dev/fd/62 00:05:50.263 ++ mktemp /tmp/62.XXX 00:05:50.263 + tmp_file_1=/tmp/62.NUO 00:05:50.263 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.263 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.264 + tmp_file_2=/tmp/spdk_tgt_config.json.YZw 00:05:50.264 + ret=0 00:05:50.264 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.522 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:50.522 + diff -u /tmp/62.NUO /tmp/spdk_tgt_config.json.YZw 00:05:50.522 + echo 'INFO: JSON config files are the same' 00:05:50.522 INFO: JSON config files are the same 00:05:50.522 + rm /tmp/62.NUO /tmp/spdk_tgt_config.json.YZw 00:05:50.522 + exit 0 00:05:50.522 21:52:01 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:50.522 21:52:01 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:50.522 INFO: changing configuration and checking if this can be detected... 00:05:50.522 21:52:01 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.522 21:52:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.782 21:52:01 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.782 21:52:01 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:50.782 21:52:01 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.782 + '[' 2 -ne 2 ']' 00:05:50.782 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.782 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
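json_diff.sh, as traced above, does not diff the files byte-for-byte: both the freshly saved configuration and the reference JSON are first normalized with config_filter.py -method sort and only then compared with diff -u. Roughly (the /tmp names below are placeholders; the script itself uses mktemp):

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
  ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/ref.sorted.json
  diff -u /tmp/live.sorted.json /tmp/ref.sorted.json && echo 'INFO: JSON config files are the same'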
00:05:50.782 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:50.782 +++ basename /dev/fd/62 00:05:50.782 ++ mktemp /tmp/62.XXX 00:05:50.782 + tmp_file_1=/tmp/62.jDm 00:05:50.782 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.782 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.782 + tmp_file_2=/tmp/spdk_tgt_config.json.GN0 00:05:50.782 + ret=0 00:05:50.782 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.041 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.041 + diff -u /tmp/62.jDm /tmp/spdk_tgt_config.json.GN0 00:05:51.041 + ret=1 00:05:51.041 + echo '=== Start of file: /tmp/62.jDm ===' 00:05:51.041 + cat /tmp/62.jDm 00:05:51.041 + echo '=== End of file: /tmp/62.jDm ===' 00:05:51.041 + echo '' 00:05:51.041 + echo '=== Start of file: /tmp/spdk_tgt_config.json.GN0 ===' 00:05:51.041 + cat /tmp/spdk_tgt_config.json.GN0 00:05:51.041 + echo '=== End of file: /tmp/spdk_tgt_config.json.GN0 ===' 00:05:51.041 + echo '' 00:05:51.041 + rm /tmp/62.jDm /tmp/spdk_tgt_config.json.GN0 00:05:51.041 + exit 1 00:05:51.041 21:52:02 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:51.041 INFO: configuration change detected. 00:05:51.041 21:52:02 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:51.041 21:52:02 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:51.041 21:52:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.041 21:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.041 21:52:02 -- json_config/json_config.sh@360 -- # local ret=0 00:05:51.041 21:52:02 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:51.041 21:52:02 -- json_config/json_config.sh@370 -- # [[ -n 2004218 ]] 00:05:51.041 21:52:02 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:51.041 21:52:02 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:51.041 21:52:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.041 21:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.041 21:52:02 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:51.041 21:52:02 -- json_config/json_config.sh@246 -- # uname -s 00:05:51.041 21:52:02 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:51.041 21:52:02 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:51.041 21:52:02 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:51.041 21:52:02 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:51.041 21:52:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.041 21:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:51.041 21:52:02 -- json_config/json_config.sh@376 -- # killprocess 2004218 00:05:51.041 21:52:02 -- common/autotest_common.sh@926 -- # '[' -z 2004218 ']' 00:05:51.041 21:52:02 -- common/autotest_common.sh@930 -- # kill -0 2004218 00:05:51.041 21:52:02 -- common/autotest_common.sh@931 -- # uname 00:05:51.041 21:52:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.041 21:52:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2004218 00:05:51.300 21:52:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.300 21:52:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.300 21:52:02 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 2004218' 00:05:51.300 killing process with pid 2004218 00:05:51.300 21:52:02 -- common/autotest_common.sh@945 -- # kill 2004218 00:05:51.300 21:52:02 -- common/autotest_common.sh@950 -- # wait 2004218 00:05:53.835 21:52:04 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.835 21:52:04 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:53.835 21:52:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.835 21:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:53.835 21:52:04 -- json_config/json_config.sh@381 -- # return 0 00:05:53.835 21:52:04 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:53.835 INFO: Success 00:05:53.835 21:52:04 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:53.835 21:52:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:05:53.835 21:52:04 -- nvmf/common.sh@116 -- # sync 00:05:53.835 21:52:04 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:05:53.835 21:52:04 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:05:53.835 21:52:04 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:05:53.835 21:52:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:05:53.835 21:52:04 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:05:53.835 00:05:53.835 real 0m25.516s 00:05:53.835 user 0m28.678s 00:05:53.835 sys 0m8.607s 00:05:53.835 21:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.835 21:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:53.835 ************************************ 00:05:53.835 END TEST json_config 00:05:53.835 ************************************ 00:05:53.835 21:52:04 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.835 21:52:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.835 21:52:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.835 21:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:53.835 ************************************ 00:05:53.835 START TEST json_config_extra_key 00:05:53.835 ************************************ 00:05:53.835 21:52:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:53.835 21:52:04 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:53.835 21:52:04 -- nvmf/common.sh@7 -- # uname -s 00:05:53.835 21:52:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:53.835 21:52:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:53.835 21:52:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:53.835 21:52:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:53.835 21:52:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:53.835 21:52:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:53.835 21:52:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:53.835 21:52:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:53.835 21:52:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:53.836 21:52:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:53.836 21:52:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:53.836 21:52:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 
00:05:53.836 21:52:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:53.836 21:52:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:53.836 21:52:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:53.836 21:52:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:53.836 21:52:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:53.836 21:52:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:53.836 21:52:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:53.836 21:52:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.836 21:52:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.836 21:52:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.836 21:52:04 -- paths/export.sh@5 -- # export PATH 00:05:53.836 21:52:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:53.836 21:52:04 -- nvmf/common.sh@46 -- # : 0 00:05:53.836 21:52:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:53.836 21:52:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:53.836 21:52:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:53.836 21:52:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:53.836 21:52:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:53.836 21:52:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:53.836 21:52:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:53.836 21:52:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:53.836 21:52:04 -- 
json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:53.836 INFO: launching applications... 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2005696 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:53.836 Waiting for target to run... 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2005696 /var/tmp/spdk_tgt.sock 00:05:53.836 21:52:04 -- common/autotest_common.sh@819 -- # '[' -z 2005696 ']' 00:05:53.836 21:52:04 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:53.836 21:52:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:53.836 21:52:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:53.836 21:52:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:53.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:53.836 21:52:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:53.836 21:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:53.836 [2024-07-26 21:52:05.028083] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:53.836 [2024-07-26 21:52:05.028140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005696 ] 00:05:54.095 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.354 [2024-07-26 21:52:05.333391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.354 [2024-07-26 21:52:05.352691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.354 [2024-07-26 21:52:05.352793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.613 21:52:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.613 21:52:05 -- common/autotest_common.sh@852 -- # return 0 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:54.613 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:54.613 INFO: shutting down applications... 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2005696 ]] 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2005696 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2005696 00:05:54.613 21:52:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:55.182 21:52:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:55.182 21:52:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:55.182 21:52:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2005696 00:05:55.182 21:52:06 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:55.182 21:52:06 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:55.182 21:52:06 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:55.182 21:52:06 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:55.182 SPDK target shutdown done 00:05:55.182 21:52:06 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:55.182 Success 00:05:55.182 00:05:55.182 real 0m1.430s 00:05:55.182 user 0m1.156s 00:05:55.182 sys 0m0.390s 00:05:55.182 21:52:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.182 21:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.182 ************************************ 00:05:55.182 END TEST json_config_extra_key 00:05:55.182 ************************************ 00:05:55.182 21:52:06 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.182 21:52:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.182 21:52:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.182 21:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.182 ************************************ 00:05:55.182 START TEST alias_rpc 00:05:55.182 ************************************ 00:05:55.182 21:52:06 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.441 * Looking for test storage... 00:05:55.441 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:55.441 21:52:06 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.441 21:52:06 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2005995 00:05:55.441 21:52:06 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2005995 00:05:55.441 21:52:06 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.441 21:52:06 -- common/autotest_common.sh@819 -- # '[' -z 2005995 ']' 00:05:55.441 21:52:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.441 21:52:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.441 21:52:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.441 21:52:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.441 21:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:55.441 [2024-07-26 21:52:06.507527] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:55.441 [2024-07-26 21:52:06.507589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005995 ] 00:05:55.441 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.441 [2024-07-26 21:52:06.592525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.441 [2024-07-26 21:52:06.629742] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.441 [2024-07-26 21:52:06.629853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.377 21:52:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.377 21:52:07 -- common/autotest_common.sh@852 -- # return 0 00:05:56.377 21:52:07 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:56.377 21:52:07 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2005995 00:05:56.377 21:52:07 -- common/autotest_common.sh@926 -- # '[' -z 2005995 ']' 00:05:56.377 21:52:07 -- common/autotest_common.sh@930 -- # kill -0 2005995 00:05:56.377 21:52:07 -- common/autotest_common.sh@931 -- # uname 00:05:56.377 21:52:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.377 21:52:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2005995 00:05:56.377 21:52:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.377 21:52:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.377 21:52:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2005995' 00:05:56.377 killing process with pid 2005995 00:05:56.377 21:52:07 -- common/autotest_common.sh@945 -- # kill 2005995 00:05:56.377 21:52:07 -- common/autotest_common.sh@950 -- # wait 2005995 00:05:56.636 00:05:56.636 real 0m1.478s 00:05:56.636 user 0m1.545s 00:05:56.636 sys 0m0.466s 00:05:56.636 21:52:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.636 21:52:07 -- common/autotest_common.sh@10 -- # set +x 
00:05:56.636 ************************************ 00:05:56.636 END TEST alias_rpc 00:05:56.636 ************************************ 00:05:56.896 21:52:07 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:56.896 21:52:07 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:56.896 21:52:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.896 21:52:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.896 21:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:56.896 ************************************ 00:05:56.896 START TEST spdkcli_tcp 00:05:56.896 ************************************ 00:05:56.896 21:52:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:56.896 * Looking for test storage... 00:05:56.896 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:56.896 21:52:07 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:56.896 21:52:07 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:56.896 21:52:07 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:56.896 21:52:07 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:56.896 21:52:07 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:56.896 21:52:07 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:56.896 21:52:07 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:56.896 21:52:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:56.896 21:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:56.896 21:52:07 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2006291 00:05:56.896 21:52:07 -- spdkcli/tcp.sh@27 -- # waitforlisten 2006291 00:05:56.896 21:52:07 -- common/autotest_common.sh@819 -- # '[' -z 2006291 ']' 00:05:56.896 21:52:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.896 21:52:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.896 21:52:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.896 21:52:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.896 21:52:07 -- common/autotest_common.sh@10 -- # set +x 00:05:56.896 21:52:07 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:56.896 [2024-07-26 21:52:08.015241] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:56.896 [2024-07-26 21:52:08.015303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006291 ] 00:05:56.896 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.896 [2024-07-26 21:52:08.100473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.155 [2024-07-26 21:52:08.138700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.155 [2024-07-26 21:52:08.138841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.155 [2024-07-26 21:52:08.138844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.725 21:52:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.725 21:52:08 -- common/autotest_common.sh@852 -- # return 0 00:05:57.725 21:52:08 -- spdkcli/tcp.sh@31 -- # socat_pid=2006348 00:05:57.725 21:52:08 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:57.725 21:52:08 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:57.725 [ 00:05:57.725 "bdev_malloc_delete", 00:05:57.725 "bdev_malloc_create", 00:05:57.725 "bdev_null_resize", 00:05:57.725 "bdev_null_delete", 00:05:57.725 "bdev_null_create", 00:05:57.725 "bdev_nvme_cuse_unregister", 00:05:57.725 "bdev_nvme_cuse_register", 00:05:57.725 "bdev_opal_new_user", 00:05:57.725 "bdev_opal_set_lock_state", 00:05:57.725 "bdev_opal_delete", 00:05:57.725 "bdev_opal_get_info", 00:05:57.725 "bdev_opal_create", 00:05:57.725 "bdev_nvme_opal_revert", 00:05:57.725 "bdev_nvme_opal_init", 00:05:57.725 "bdev_nvme_send_cmd", 00:05:57.725 "bdev_nvme_get_path_iostat", 00:05:57.725 "bdev_nvme_get_mdns_discovery_info", 00:05:57.725 "bdev_nvme_stop_mdns_discovery", 00:05:57.725 "bdev_nvme_start_mdns_discovery", 00:05:57.725 "bdev_nvme_set_multipath_policy", 00:05:57.725 "bdev_nvme_set_preferred_path", 00:05:57.725 "bdev_nvme_get_io_paths", 00:05:57.725 "bdev_nvme_remove_error_injection", 00:05:57.725 "bdev_nvme_add_error_injection", 00:05:57.725 "bdev_nvme_get_discovery_info", 00:05:57.725 "bdev_nvme_stop_discovery", 00:05:57.725 "bdev_nvme_start_discovery", 00:05:57.725 "bdev_nvme_get_controller_health_info", 00:05:57.725 "bdev_nvme_disable_controller", 00:05:57.725 "bdev_nvme_enable_controller", 00:05:57.725 "bdev_nvme_reset_controller", 00:05:57.725 "bdev_nvme_get_transport_statistics", 00:05:57.725 "bdev_nvme_apply_firmware", 00:05:57.725 "bdev_nvme_detach_controller", 00:05:57.725 "bdev_nvme_get_controllers", 00:05:57.725 "bdev_nvme_attach_controller", 00:05:57.725 "bdev_nvme_set_hotplug", 00:05:57.725 "bdev_nvme_set_options", 00:05:57.725 "bdev_passthru_delete", 00:05:57.725 "bdev_passthru_create", 00:05:57.725 "bdev_lvol_grow_lvstore", 00:05:57.725 "bdev_lvol_get_lvols", 00:05:57.725 "bdev_lvol_get_lvstores", 00:05:57.725 "bdev_lvol_delete", 00:05:57.725 "bdev_lvol_set_read_only", 00:05:57.725 "bdev_lvol_resize", 00:05:57.725 "bdev_lvol_decouple_parent", 00:05:57.725 "bdev_lvol_inflate", 00:05:57.725 "bdev_lvol_rename", 00:05:57.725 "bdev_lvol_clone_bdev", 00:05:57.725 "bdev_lvol_clone", 00:05:57.725 "bdev_lvol_snapshot", 00:05:57.725 "bdev_lvol_create", 00:05:57.725 "bdev_lvol_delete_lvstore", 00:05:57.725 "bdev_lvol_rename_lvstore", 00:05:57.725 "bdev_lvol_create_lvstore", 00:05:57.725 "bdev_raid_set_options", 00:05:57.726 
"bdev_raid_remove_base_bdev", 00:05:57.726 "bdev_raid_add_base_bdev", 00:05:57.726 "bdev_raid_delete", 00:05:57.726 "bdev_raid_create", 00:05:57.726 "bdev_raid_get_bdevs", 00:05:57.726 "bdev_error_inject_error", 00:05:57.726 "bdev_error_delete", 00:05:57.726 "bdev_error_create", 00:05:57.726 "bdev_split_delete", 00:05:57.726 "bdev_split_create", 00:05:57.726 "bdev_delay_delete", 00:05:57.726 "bdev_delay_create", 00:05:57.726 "bdev_delay_update_latency", 00:05:57.726 "bdev_zone_block_delete", 00:05:57.726 "bdev_zone_block_create", 00:05:57.726 "blobfs_create", 00:05:57.726 "blobfs_detect", 00:05:57.726 "blobfs_set_cache_size", 00:05:57.726 "bdev_aio_delete", 00:05:57.726 "bdev_aio_rescan", 00:05:57.726 "bdev_aio_create", 00:05:57.726 "bdev_ftl_set_property", 00:05:57.726 "bdev_ftl_get_properties", 00:05:57.726 "bdev_ftl_get_stats", 00:05:57.726 "bdev_ftl_unmap", 00:05:57.726 "bdev_ftl_unload", 00:05:57.726 "bdev_ftl_delete", 00:05:57.726 "bdev_ftl_load", 00:05:57.726 "bdev_ftl_create", 00:05:57.726 "bdev_virtio_attach_controller", 00:05:57.726 "bdev_virtio_scsi_get_devices", 00:05:57.726 "bdev_virtio_detach_controller", 00:05:57.726 "bdev_virtio_blk_set_hotplug", 00:05:57.726 "bdev_iscsi_delete", 00:05:57.726 "bdev_iscsi_create", 00:05:57.726 "bdev_iscsi_set_options", 00:05:57.726 "accel_error_inject_error", 00:05:57.726 "ioat_scan_accel_module", 00:05:57.726 "dsa_scan_accel_module", 00:05:57.726 "iaa_scan_accel_module", 00:05:57.726 "iscsi_set_options", 00:05:57.726 "iscsi_get_auth_groups", 00:05:57.726 "iscsi_auth_group_remove_secret", 00:05:57.726 "iscsi_auth_group_add_secret", 00:05:57.726 "iscsi_delete_auth_group", 00:05:57.726 "iscsi_create_auth_group", 00:05:57.726 "iscsi_set_discovery_auth", 00:05:57.726 "iscsi_get_options", 00:05:57.726 "iscsi_target_node_request_logout", 00:05:57.726 "iscsi_target_node_set_redirect", 00:05:57.726 "iscsi_target_node_set_auth", 00:05:57.726 "iscsi_target_node_add_lun", 00:05:57.726 "iscsi_get_connections", 00:05:57.726 "iscsi_portal_group_set_auth", 00:05:57.726 "iscsi_start_portal_group", 00:05:57.726 "iscsi_delete_portal_group", 00:05:57.726 "iscsi_create_portal_group", 00:05:57.726 "iscsi_get_portal_groups", 00:05:57.726 "iscsi_delete_target_node", 00:05:57.726 "iscsi_target_node_remove_pg_ig_maps", 00:05:57.726 "iscsi_target_node_add_pg_ig_maps", 00:05:57.726 "iscsi_create_target_node", 00:05:57.726 "iscsi_get_target_nodes", 00:05:57.726 "iscsi_delete_initiator_group", 00:05:57.726 "iscsi_initiator_group_remove_initiators", 00:05:57.726 "iscsi_initiator_group_add_initiators", 00:05:57.726 "iscsi_create_initiator_group", 00:05:57.726 "iscsi_get_initiator_groups", 00:05:57.726 "nvmf_set_crdt", 00:05:57.726 "nvmf_set_config", 00:05:57.726 "nvmf_set_max_subsystems", 00:05:57.726 "nvmf_subsystem_get_listeners", 00:05:57.726 "nvmf_subsystem_get_qpairs", 00:05:57.726 "nvmf_subsystem_get_controllers", 00:05:57.726 "nvmf_get_stats", 00:05:57.726 "nvmf_get_transports", 00:05:57.726 "nvmf_create_transport", 00:05:57.726 "nvmf_get_targets", 00:05:57.726 "nvmf_delete_target", 00:05:57.726 "nvmf_create_target", 00:05:57.726 "nvmf_subsystem_allow_any_host", 00:05:57.726 "nvmf_subsystem_remove_host", 00:05:57.726 "nvmf_subsystem_add_host", 00:05:57.726 "nvmf_subsystem_remove_ns", 00:05:57.726 "nvmf_subsystem_add_ns", 00:05:57.726 "nvmf_subsystem_listener_set_ana_state", 00:05:57.726 "nvmf_discovery_get_referrals", 00:05:57.726 "nvmf_discovery_remove_referral", 00:05:57.726 "nvmf_discovery_add_referral", 00:05:57.726 "nvmf_subsystem_remove_listener", 
00:05:57.726 "nvmf_subsystem_add_listener", 00:05:57.726 "nvmf_delete_subsystem", 00:05:57.726 "nvmf_create_subsystem", 00:05:57.726 "nvmf_get_subsystems", 00:05:57.726 "env_dpdk_get_mem_stats", 00:05:57.726 "nbd_get_disks", 00:05:57.726 "nbd_stop_disk", 00:05:57.726 "nbd_start_disk", 00:05:57.726 "ublk_recover_disk", 00:05:57.726 "ublk_get_disks", 00:05:57.726 "ublk_stop_disk", 00:05:57.726 "ublk_start_disk", 00:05:57.726 "ublk_destroy_target", 00:05:57.726 "ublk_create_target", 00:05:57.726 "virtio_blk_create_transport", 00:05:57.726 "virtio_blk_get_transports", 00:05:57.726 "vhost_controller_set_coalescing", 00:05:57.726 "vhost_get_controllers", 00:05:57.726 "vhost_delete_controller", 00:05:57.726 "vhost_create_blk_controller", 00:05:57.726 "vhost_scsi_controller_remove_target", 00:05:57.726 "vhost_scsi_controller_add_target", 00:05:57.726 "vhost_start_scsi_controller", 00:05:57.726 "vhost_create_scsi_controller", 00:05:57.726 "thread_set_cpumask", 00:05:57.726 "framework_get_scheduler", 00:05:57.726 "framework_set_scheduler", 00:05:57.726 "framework_get_reactors", 00:05:57.726 "thread_get_io_channels", 00:05:57.726 "thread_get_pollers", 00:05:57.726 "thread_get_stats", 00:05:57.726 "framework_monitor_context_switch", 00:05:57.726 "spdk_kill_instance", 00:05:57.726 "log_enable_timestamps", 00:05:57.726 "log_get_flags", 00:05:57.726 "log_clear_flag", 00:05:57.726 "log_set_flag", 00:05:57.726 "log_get_level", 00:05:57.726 "log_set_level", 00:05:57.726 "log_get_print_level", 00:05:57.726 "log_set_print_level", 00:05:57.726 "framework_enable_cpumask_locks", 00:05:57.726 "framework_disable_cpumask_locks", 00:05:57.726 "framework_wait_init", 00:05:57.726 "framework_start_init", 00:05:57.726 "scsi_get_devices", 00:05:57.726 "bdev_get_histogram", 00:05:57.726 "bdev_enable_histogram", 00:05:57.726 "bdev_set_qos_limit", 00:05:57.726 "bdev_set_qd_sampling_period", 00:05:57.726 "bdev_get_bdevs", 00:05:57.726 "bdev_reset_iostat", 00:05:57.726 "bdev_get_iostat", 00:05:57.726 "bdev_examine", 00:05:57.726 "bdev_wait_for_examine", 00:05:57.726 "bdev_set_options", 00:05:57.726 "notify_get_notifications", 00:05:57.726 "notify_get_types", 00:05:57.726 "accel_get_stats", 00:05:57.726 "accel_set_options", 00:05:57.726 "accel_set_driver", 00:05:57.726 "accel_crypto_key_destroy", 00:05:57.726 "accel_crypto_keys_get", 00:05:57.726 "accel_crypto_key_create", 00:05:57.726 "accel_assign_opc", 00:05:57.726 "accel_get_module_info", 00:05:57.726 "accel_get_opc_assignments", 00:05:57.726 "vmd_rescan", 00:05:57.726 "vmd_remove_device", 00:05:57.726 "vmd_enable", 00:05:57.726 "sock_set_default_impl", 00:05:57.726 "sock_impl_set_options", 00:05:57.726 "sock_impl_get_options", 00:05:57.726 "iobuf_get_stats", 00:05:57.726 "iobuf_set_options", 00:05:57.726 "framework_get_pci_devices", 00:05:57.726 "framework_get_config", 00:05:57.726 "framework_get_subsystems", 00:05:57.726 "trace_get_info", 00:05:57.726 "trace_get_tpoint_group_mask", 00:05:57.726 "trace_disable_tpoint_group", 00:05:57.726 "trace_enable_tpoint_group", 00:05:57.726 "trace_clear_tpoint_mask", 00:05:57.726 "trace_set_tpoint_mask", 00:05:57.726 "spdk_get_version", 00:05:57.726 "rpc_get_methods" 00:05:57.726 ] 00:05:57.985 21:52:08 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:57.985 21:52:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:57.985 21:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:57.985 21:52:08 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:57.985 21:52:08 -- spdkcli/tcp.sh@38 -- # killprocess 
2006291 00:05:57.985 21:52:08 -- common/autotest_common.sh@926 -- # '[' -z 2006291 ']' 00:05:57.985 21:52:08 -- common/autotest_common.sh@930 -- # kill -0 2006291 00:05:57.985 21:52:08 -- common/autotest_common.sh@931 -- # uname 00:05:57.985 21:52:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.985 21:52:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2006291 00:05:57.985 21:52:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:57.985 21:52:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.985 21:52:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2006291' 00:05:57.985 killing process with pid 2006291 00:05:57.985 21:52:09 -- common/autotest_common.sh@945 -- # kill 2006291 00:05:57.985 21:52:09 -- common/autotest_common.sh@950 -- # wait 2006291 00:05:58.244 00:05:58.244 real 0m1.465s 00:05:58.244 user 0m2.661s 00:05:58.244 sys 0m0.499s 00:05:58.244 21:52:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.244 21:52:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.244 ************************************ 00:05:58.244 END TEST spdkcli_tcp 00:05:58.244 ************************************ 00:05:58.244 21:52:09 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.244 21:52:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.244 21:52:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.244 21:52:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.244 ************************************ 00:05:58.244 START TEST dpdk_mem_utility 00:05:58.244 ************************************ 00:05:58.244 21:52:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.502 * Looking for test storage... 00:05:58.502 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:58.502 21:52:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:58.502 21:52:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2006619 00:05:58.502 21:52:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2006619 00:05:58.502 21:52:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.502 21:52:09 -- common/autotest_common.sh@819 -- # '[' -z 2006619 ']' 00:05:58.502 21:52:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.502 21:52:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.502 21:52:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.502 21:52:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.502 21:52:09 -- common/autotest_common.sh@10 -- # set +x 00:05:58.502 [2024-07-26 21:52:09.535780] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:58.502 [2024-07-26 21:52:09.535840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006619 ] 00:05:58.502 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.502 [2024-07-26 21:52:09.622491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.502 [2024-07-26 21:52:09.659340] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.502 [2024-07-26 21:52:09.659466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.440 21:52:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.440 21:52:10 -- common/autotest_common.sh@852 -- # return 0 00:05:59.440 21:52:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.440 21:52:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.440 21:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.440 21:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:59.440 { 00:05:59.440 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.440 } 00:05:59.440 21:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.440 21:52:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.440 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:59.440 1 heaps totaling size 814.000000 MiB 00:05:59.440 size: 814.000000 MiB heap id: 0 00:05:59.440 end heaps---------- 00:05:59.440 8 mempools totaling size 598.116089 MiB 00:05:59.440 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.440 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.440 size: 84.521057 MiB name: bdev_io_2006619 00:05:59.440 size: 51.011292 MiB name: evtpool_2006619 00:05:59.440 size: 50.003479 MiB name: msgpool_2006619 00:05:59.440 size: 21.763794 MiB name: PDU_Pool 00:05:59.440 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.441 size: 0.026123 MiB name: Session_Pool 00:05:59.441 end mempools------- 00:05:59.441 6 memzones totaling size 4.142822 MiB 00:05:59.441 size: 1.000366 MiB name: RG_ring_0_2006619 00:05:59.441 size: 1.000366 MiB name: RG_ring_1_2006619 00:05:59.441 size: 1.000366 MiB name: RG_ring_4_2006619 00:05:59.441 size: 1.000366 MiB name: RG_ring_5_2006619 00:05:59.441 size: 0.125366 MiB name: RG_ring_2_2006619 00:05:59.441 size: 0.015991 MiB name: RG_ring_3_2006619 00:05:59.441 end memzones------- 00:05:59.441 21:52:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.441 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:59.441 list of free elements. 
size: 12.519348 MiB 00:05:59.441 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:59.441 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:59.441 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:59.441 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:59.441 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:59.441 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:59.441 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:59.441 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:59.441 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:59.441 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:59.441 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:59.441 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:59.441 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:59.441 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:59.441 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:59.441 list of standard malloc elements. size: 199.218079 MiB 00:05:59.441 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:59.441 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:59.441 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:59.441 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:59.441 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:59.441 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:59.441 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:59.441 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:59.441 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:59.441 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:59.441 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:59.441 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:59.441 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:59.441 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:59.441 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:59.441 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:59.441 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:59.441 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:59.441 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:59.441 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:59.441 list of memzone associated elements. size: 602.262573 MiB 00:05:59.441 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:59.441 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.441 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:59.441 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.441 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:59.441 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2006619_0 00:05:59.441 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:59.441 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2006619_0 00:05:59.441 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:59.441 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2006619_0 00:05:59.441 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:59.441 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.441 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:59.441 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.441 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:59.441 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2006619 00:05:59.441 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:59.441 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2006619 00:05:59.441 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:59.441 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2006619 00:05:59.441 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:59.441 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.441 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:59.441 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.441 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:59.441 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.441 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:59.441 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.441 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:59.441 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2006619 00:05:59.441 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:59.441 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2006619 00:05:59.441 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:59.441 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2006619 00:05:59.441 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:59.441 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2006619 00:05:59.441 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:59.441 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2006619 00:05:59.441 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:59.441 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.441 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:59.441 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.441 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:59.441 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.441 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:59.441 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2006619 00:05:59.441 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:59.441 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.441 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:59.441 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.441 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:59.441 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2006619 00:05:59.441 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:59.441 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.441 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:59.441 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2006619 00:05:59.441 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:59.441 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2006619 00:05:59.441 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:59.441 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.441 21:52:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.441 21:52:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2006619 00:05:59.442 21:52:10 -- common/autotest_common.sh@926 -- # '[' -z 2006619 ']' 00:05:59.442 21:52:10 -- common/autotest_common.sh@930 -- # kill -0 2006619 00:05:59.442 21:52:10 -- common/autotest_common.sh@931 -- # uname 00:05:59.442 21:52:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:59.442 21:52:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2006619 00:05:59.442 21:52:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:59.442 21:52:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:59.442 21:52:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2006619' 00:05:59.442 killing process with pid 2006619 00:05:59.442 21:52:10 -- common/autotest_common.sh@945 -- # kill 2006619 00:05:59.442 21:52:10 -- common/autotest_common.sh@950 -- # wait 2006619 00:05:59.701 00:05:59.701 real 0m1.395s 00:05:59.701 user 0m1.412s 00:05:59.701 sys 0m0.452s 00:05:59.701 21:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.701 21:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:59.701 ************************************ 00:05:59.701 END TEST dpdk_mem_utility 00:05:59.701 ************************************ 00:05:59.701 21:52:10 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:59.701 21:52:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.701 21:52:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.701 21:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:59.701 
************************************ 00:05:59.701 START TEST event 00:05:59.701 ************************************ 00:05:59.701 21:52:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:59.701 * Looking for test storage... 00:05:59.960 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:59.960 21:52:10 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:59.960 21:52:10 -- bdev/nbd_common.sh@6 -- # set -e 00:05:59.960 21:52:10 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.960 21:52:10 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:59.960 21:52:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.960 21:52:10 -- common/autotest_common.sh@10 -- # set +x 00:05:59.960 ************************************ 00:05:59.960 START TEST event_perf 00:05:59.960 ************************************ 00:05:59.960 21:52:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:59.960 Running I/O for 1 seconds...[2024-07-26 21:52:10.961046] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:59.960 [2024-07-26 21:52:10.961139] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006889 ] 00:05:59.960 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.960 [2024-07-26 21:52:11.049028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:59.960 [2024-07-26 21:52:11.088260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.960 [2024-07-26 21:52:11.088358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.960 [2024-07-26 21:52:11.088443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.960 [2024-07-26 21:52:11.088445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.338 Running I/O for 1 seconds... 00:06:01.338 lcore 0: 209432 00:06:01.338 lcore 1: 209431 00:06:01.338 lcore 2: 209431 00:06:01.338 lcore 3: 209430 00:06:01.338 done. 
00:06:01.338 00:06:01.338 real 0m1.208s 00:06:01.338 user 0m4.095s 00:06:01.338 sys 0m0.111s 00:06:01.338 21:52:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.338 21:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.338 ************************************ 00:06:01.338 END TEST event_perf 00:06:01.338 ************************************ 00:06:01.338 21:52:12 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:01.338 21:52:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:01.338 21:52:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.338 21:52:12 -- common/autotest_common.sh@10 -- # set +x 00:06:01.338 ************************************ 00:06:01.338 START TEST event_reactor 00:06:01.338 ************************************ 00:06:01.338 21:52:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:01.338 [2024-07-26 21:52:12.212044] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:01.338 [2024-07-26 21:52:12.212134] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007052 ] 00:06:01.338 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.338 [2024-07-26 21:52:12.298039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.338 [2024-07-26 21:52:12.333140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.274 test_start 00:06:02.274 oneshot 00:06:02.274 tick 100 00:06:02.274 tick 100 00:06:02.274 tick 250 00:06:02.274 tick 100 00:06:02.274 tick 100 00:06:02.274 tick 100 00:06:02.274 tick 250 00:06:02.274 tick 500 00:06:02.274 tick 100 00:06:02.274 tick 100 00:06:02.274 tick 250 00:06:02.274 tick 100 00:06:02.274 tick 100 00:06:02.274 test_end 00:06:02.274 00:06:02.274 real 0m1.197s 00:06:02.274 user 0m1.099s 00:06:02.274 sys 0m0.094s 00:06:02.274 21:52:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.274 21:52:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.274 ************************************ 00:06:02.274 END TEST event_reactor 00:06:02.274 ************************************ 00:06:02.274 21:52:13 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:02.274 21:52:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:02.274 21:52:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.274 21:52:13 -- common/autotest_common.sh@10 -- # set +x 00:06:02.274 ************************************ 00:06:02.274 START TEST event_reactor_perf 00:06:02.274 ************************************ 00:06:02.274 21:52:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:02.274 [2024-07-26 21:52:13.455070] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:02.274 [2024-07-26 21:52:13.455160] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007318 ] 00:06:02.274 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.533 [2024-07-26 21:52:13.538577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.533 [2024-07-26 21:52:13.573179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.469 test_start 00:06:03.469 test_end 00:06:03.469 Performance: 522194 events per second 00:06:03.469 00:06:03.469 real 0m1.194s 00:06:03.469 user 0m1.096s 00:06:03.469 sys 0m0.094s 00:06:03.469 21:52:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.469 21:52:14 -- common/autotest_common.sh@10 -- # set +x 00:06:03.469 ************************************ 00:06:03.469 END TEST event_reactor_perf 00:06:03.469 ************************************ 00:06:03.469 21:52:14 -- event/event.sh@49 -- # uname -s 00:06:03.469 21:52:14 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:03.469 21:52:14 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:03.469 21:52:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.469 21:52:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.469 21:52:14 -- common/autotest_common.sh@10 -- # set +x 00:06:03.469 ************************************ 00:06:03.469 START TEST event_scheduler 00:06:03.469 ************************************ 00:06:03.469 21:52:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:03.728 * Looking for test storage... 00:06:03.728 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:03.728 21:52:14 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:03.728 21:52:14 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2007627 00:06:03.728 21:52:14 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.728 21:52:14 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:03.728 21:52:14 -- scheduler/scheduler.sh@37 -- # waitforlisten 2007627 00:06:03.728 21:52:14 -- common/autotest_common.sh@819 -- # '[' -z 2007627 ']' 00:06:03.728 21:52:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.728 21:52:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.728 21:52:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.728 21:52:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.728 21:52:14 -- common/autotest_common.sh@10 -- # set +x 00:06:03.728 [2024-07-26 21:52:14.818676] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:03.728 [2024-07-26 21:52:14.818730] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007627 ] 00:06:03.728 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.728 [2024-07-26 21:52:14.898033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.728 [2024-07-26 21:52:14.936143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.728 [2024-07-26 21:52:14.936237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.728 [2024-07-26 21:52:14.936322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.728 [2024-07-26 21:52:14.936324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.666 21:52:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.666 21:52:15 -- common/autotest_common.sh@852 -- # return 0 00:06:04.666 21:52:15 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:04.666 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.666 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.666 POWER: Env isn't set yet! 00:06:04.666 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:04.666 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:04.666 POWER: Cannot set governor of lcore 0 to userspace 00:06:04.666 POWER: Attempting to initialise PSTAT power management... 00:06:04.666 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:04.666 POWER: Initialized successfully for lcore 0 power management 00:06:04.666 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:04.666 POWER: Initialized successfully for lcore 1 power management 00:06:04.667 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:04.667 POWER: Initialized successfully for lcore 2 power management 00:06:04.667 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:04.667 POWER: Initialized successfully for lcore 3 power management 00:06:04.667 [2024-07-26 21:52:15.659879] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:04.667 [2024-07-26 21:52:15.659894] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:04.667 [2024-07-26 21:52:15.659903] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 [2024-07-26 21:52:15.723810] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:04.667 21:52:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.667 21:52:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 ************************************ 00:06:04.667 START TEST scheduler_create_thread 00:06:04.667 ************************************ 00:06:04.667 21:52:15 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 2 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 3 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 4 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 5 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 6 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 7 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 8 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 9 00:06:04.667 
21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 10 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:04.667 21:52:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.667 21:52:15 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:04.667 21:52:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.667 21:52:15 -- common/autotest_common.sh@10 -- # set +x 00:06:06.074 21:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:06.074 21:52:17 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:06.074 21:52:17 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:06.074 21:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:06.074 21:52:17 -- common/autotest_common.sh@10 -- # set +x 00:06:07.454 21:52:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:07.454 00:06:07.454 real 0m2.617s 00:06:07.454 user 0m0.012s 00:06:07.454 sys 0m0.004s 00:06:07.454 21:52:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.454 21:52:18 -- common/autotest_common.sh@10 -- # set +x 00:06:07.454 ************************************ 00:06:07.454 END TEST scheduler_create_thread 00:06:07.454 ************************************ 00:06:07.454 21:52:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:07.454 21:52:18 -- scheduler/scheduler.sh@46 -- # killprocess 2007627 00:06:07.454 21:52:18 -- common/autotest_common.sh@926 -- # '[' -z 2007627 ']' 00:06:07.454 21:52:18 -- common/autotest_common.sh@930 -- # kill -0 2007627 00:06:07.454 21:52:18 -- common/autotest_common.sh@931 -- # uname 00:06:07.454 21:52:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:07.454 21:52:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2007627 00:06:07.454 21:52:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:07.454 21:52:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:07.454 21:52:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2007627' 00:06:07.454 killing process with pid 2007627 00:06:07.454 21:52:18 -- common/autotest_common.sh@945 -- # kill 2007627 00:06:07.454 21:52:18 -- common/autotest_common.sh@950 -- # wait 2007627 00:06:07.713 [2024-07-26 21:52:18.829788] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:07.972 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:07.972 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:07.972 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:07.972 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:07.972 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:07.972 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:07.972 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:07.972 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:07.972 00:06:07.972 real 0m4.333s 00:06:07.972 user 0m8.194s 00:06:07.972 sys 0m0.413s 00:06:07.972 21:52:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.972 21:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:07.972 ************************************ 00:06:07.972 END TEST event_scheduler 00:06:07.972 ************************************ 00:06:07.972 21:52:19 -- event/event.sh@51 -- # modprobe -n nbd 00:06:07.972 21:52:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:07.972 21:52:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.972 21:52:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.973 21:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:07.973 ************************************ 00:06:07.973 START TEST app_repeat 00:06:07.973 ************************************ 00:06:07.973 21:52:19 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:07.973 21:52:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.973 21:52:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.973 21:52:19 -- event/event.sh@13 -- # local nbd_list 00:06:07.973 21:52:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:07.973 21:52:19 -- event/event.sh@14 -- # local bdev_list 00:06:07.973 21:52:19 -- event/event.sh@15 -- # local repeat_times=4 00:06:07.973 21:52:19 -- event/event.sh@17 -- # modprobe nbd 00:06:07.973 21:52:19 -- event/event.sh@19 -- # repeat_pid=2008482 00:06:07.973 21:52:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.973 21:52:19 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:07.973 21:52:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2008482' 00:06:07.973 Process app_repeat pid: 2008482 00:06:07.973 21:52:19 -- event/event.sh@23 -- # for i in {0..2} 00:06:07.973 21:52:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:07.973 spdk_app_start Round 0 00:06:07.973 21:52:19 -- event/event.sh@25 -- # waitforlisten 2008482 /var/tmp/spdk-nbd.sock 00:06:07.973 21:52:19 -- common/autotest_common.sh@819 -- # '[' -z 2008482 ']' 00:06:07.973 21:52:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.973 21:52:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.973 21:52:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:07.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.973 21:52:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.973 21:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:07.973 [2024-07-26 21:52:19.103851] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:07.973 [2024-07-26 21:52:19.103922] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008482 ] 00:06:07.973 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.973 [2024-07-26 21:52:19.188237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.232 [2024-07-26 21:52:19.223823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.232 [2024-07-26 21:52:19.223826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.801 21:52:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.801 21:52:19 -- common/autotest_common.sh@852 -- # return 0 00:06:08.801 21:52:19 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.060 Malloc0 00:06:09.060 21:52:20 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.060 Malloc1 00:06:09.060 21:52:20 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@12 -- # local i 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.060 21:52:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:09.319 /dev/nbd0 00:06:09.319 21:52:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.319 21:52:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.319 21:52:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:09.319 21:52:20 -- common/autotest_common.sh@857 -- # local i 00:06:09.319 21:52:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.319 21:52:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.319 21:52:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:09.319 21:52:20 -- common/autotest_common.sh@861 -- 
# break 00:06:09.319 21:52:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.319 21:52:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.319 21:52:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.319 1+0 records in 00:06:09.319 1+0 records out 00:06:09.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021556 s, 19.0 MB/s 00:06:09.319 21:52:20 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:09.319 21:52:20 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.319 21:52:20 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:09.319 21:52:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.319 21:52:20 -- common/autotest_common.sh@877 -- # return 0 00:06:09.319 21:52:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.319 21:52:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.319 21:52:20 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.578 /dev/nbd1 00:06:09.578 21:52:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.578 21:52:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.578 21:52:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:09.578 21:52:20 -- common/autotest_common.sh@857 -- # local i 00:06:09.578 21:52:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:09.578 21:52:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:09.578 21:52:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:09.578 21:52:20 -- common/autotest_common.sh@861 -- # break 00:06:09.578 21:52:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:09.578 21:52:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:09.578 21:52:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.578 1+0 records in 00:06:09.578 1+0 records out 00:06:09.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199473 s, 20.5 MB/s 00:06:09.578 21:52:20 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:09.578 21:52:20 -- common/autotest_common.sh@874 -- # size=4096 00:06:09.578 21:52:20 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:09.578 21:52:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:09.578 21:52:20 -- common/autotest_common.sh@877 -- # return 0 00:06:09.578 21:52:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.578 21:52:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.579 21:52:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.579 21:52:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.579 21:52:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.838 { 00:06:09.838 "nbd_device": "/dev/nbd0", 00:06:09.838 "bdev_name": "Malloc0" 00:06:09.838 }, 00:06:09.838 { 00:06:09.838 "nbd_device": "/dev/nbd1", 00:06:09.838 "bdev_name": "Malloc1" 00:06:09.838 } 00:06:09.838 ]' 
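Each nbd_start_disk call above is immediately followed by a waitfornbd check before the device is used. Condensed from what the xtrace shows (the 20-try loops, the /proc/partitions check, and the O_DIRECT read are from the trace; the sleep interval and temp-file path are filled in as assumptions):

    waitfornbd() {
        local nbd_name=$1 i size tmp=/tmp/nbdtest
        # first wait for the kernel to expose the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # then require a successful direct-I/O read of the first 4 KiB
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }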
00:06:09.838 21:52:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.838 { 00:06:09.838 "nbd_device": "/dev/nbd0", 00:06:09.838 "bdev_name": "Malloc0" 00:06:09.838 }, 00:06:09.838 { 00:06:09.838 "nbd_device": "/dev/nbd1", 00:06:09.838 "bdev_name": "Malloc1" 00:06:09.838 } 00:06:09.838 ]' 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.838 /dev/nbd1' 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.838 /dev/nbd1' 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.838 256+0 records in 00:06:09.838 256+0 records out 00:06:09.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457564 s, 229 MB/s 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.838 256+0 records in 00:06:09.838 256+0 records out 00:06:09.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016756 s, 62.6 MB/s 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.838 21:52:20 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.838 256+0 records in 00:06:09.838 256+0 records out 00:06:09.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209725 s, 50.0 MB/s 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
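The data path of every round is the same write-then-verify cycle seen in the dd and cmp output above. As one self-contained sketch (device names match this run; the temp-file location is illustrative):

    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest

    # write: generate 1 MiB of random data and push it to every exported NBD device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: compare each device back against the source file, byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # any mismatch gives a non-zero exit and fails the test
    done
    rm "$tmp_file"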
00:06:09.839 21:52:20 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.839 21:52:20 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@41 -- # break 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.098 21:52:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.357 21:52:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.357 21:52:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.357 21:52:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.357 21:52:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.357 21:52:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.357 21:52:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.357 21:52:21 -- bdev/nbd_common.sh@41 -- # break 00:06:10.357 21:52:21 -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.358 21:52:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.358 21:52:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.358 21:52:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.358 21:52:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.358 21:52:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.358 21:52:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@65 -- # true 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.617 21:52:21 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.617 21:52:21 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.617 21:52:21 -- event/event.sh@35 -- # sleep 3 00:06:10.876 [2024-07-26 21:52:21.972243] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:10.876 [2024-07-26 21:52:22.004578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.876 [2024-07-26 21:52:22.004580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.876 [2024-07-26 21:52:22.045647] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.876 [2024-07-26 21:52:22.045690] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.166 21:52:24 -- event/event.sh@23 -- # for i in {0..2} 00:06:14.166 21:52:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.166 spdk_app_start Round 1 00:06:14.166 21:52:24 -- event/event.sh@25 -- # waitforlisten 2008482 /var/tmp/spdk-nbd.sock 00:06:14.166 21:52:24 -- common/autotest_common.sh@819 -- # '[' -z 2008482 ']' 00:06:14.166 21:52:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.166 21:52:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.166 21:52:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.166 21:52:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.166 21:52:24 -- common/autotest_common.sh@10 -- # set +x 00:06:14.166 21:52:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.166 21:52:24 -- common/autotest_common.sh@852 -- # return 0 00:06:14.166 21:52:24 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.166 Malloc0 00:06:14.166 21:52:25 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.166 Malloc1 00:06:14.166 21:52:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@12 -- # local i 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.166 21:52:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.426 /dev/nbd0 00:06:14.426 21:52:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.426 21:52:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.426 
21:52:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:14.426 21:52:25 -- common/autotest_common.sh@857 -- # local i 00:06:14.426 21:52:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.426 21:52:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.426 21:52:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:14.426 21:52:25 -- common/autotest_common.sh@861 -- # break 00:06:14.426 21:52:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.426 21:52:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.426 21:52:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.426 1+0 records in 00:06:14.426 1+0 records out 00:06:14.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220153 s, 18.6 MB/s 00:06:14.426 21:52:25 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:14.426 21:52:25 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.426 21:52:25 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:14.426 21:52:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.426 21:52:25 -- common/autotest_common.sh@877 -- # return 0 00:06:14.426 21:52:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.426 21:52:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.426 21:52:25 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.685 /dev/nbd1 00:06:14.685 21:52:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.685 21:52:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.685 21:52:25 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:14.685 21:52:25 -- common/autotest_common.sh@857 -- # local i 00:06:14.685 21:52:25 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:14.685 21:52:25 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:14.685 21:52:25 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:14.685 21:52:25 -- common/autotest_common.sh@861 -- # break 00:06:14.685 21:52:25 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:14.685 21:52:25 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:14.685 21:52:25 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.685 1+0 records in 00:06:14.685 1+0 records out 00:06:14.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170104 s, 24.1 MB/s 00:06:14.685 21:52:25 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:14.685 21:52:25 -- common/autotest_common.sh@874 -- # size=4096 00:06:14.685 21:52:25 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:14.685 21:52:25 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:14.685 21:52:25 -- common/autotest_common.sh@877 -- # return 0 00:06:14.685 21:52:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.685 21:52:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.685 21:52:25 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.686 21:52:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.686 
21:52:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.686 21:52:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:14.686 { 00:06:14.686 "nbd_device": "/dev/nbd0", 00:06:14.686 "bdev_name": "Malloc0" 00:06:14.686 }, 00:06:14.686 { 00:06:14.686 "nbd_device": "/dev/nbd1", 00:06:14.686 "bdev_name": "Malloc1" 00:06:14.686 } 00:06:14.686 ]' 00:06:14.686 21:52:25 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:14.686 { 00:06:14.686 "nbd_device": "/dev/nbd0", 00:06:14.686 "bdev_name": "Malloc0" 00:06:14.686 }, 00:06:14.686 { 00:06:14.686 "nbd_device": "/dev/nbd1", 00:06:14.686 "bdev_name": "Malloc1" 00:06:14.686 } 00:06:14.686 ]' 00:06:14.686 21:52:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:14.945 /dev/nbd1' 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:14.945 /dev/nbd1' 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@65 -- # count=2 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@95 -- # count=2 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:14.945 256+0 records in 00:06:14.945 256+0 records out 00:06:14.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011458 s, 91.5 MB/s 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:14.945 256+0 records in 00:06:14.945 256+0 records out 00:06:14.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194534 s, 53.9 MB/s 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:14.945 21:52:25 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:14.945 256+0 records in 00:06:14.945 256+0 records out 00:06:14.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207883 s, 50.4 MB/s 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:14.945 
21:52:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@51 -- # local i 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:14.945 21:52:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@41 -- # break 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@41 -- # break 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.205 21:52:26 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@65 -- # true 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.463 
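After the verify pass, each round tears the exports down and checks that the target reports none left. The stop/count sequence above reduces to roughly this sketch (same RPC socket as the rest of the run; retry timing assumed):

    rpc=(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock)

    for dev in /dev/nbd0 /dev/nbd1; do
        "${rpc[@]}" nbd_stop_disk "$dev"
        # wait for the kernel device to drop out of /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$(basename "$dev")" /proc/partitions || break
            sleep 0.1
        done
    done

    # the target should now report zero NBD exports
    count=$("${rpc[@]}" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]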
21:52:26 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.463 21:52:26 -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.463 21:52:26 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.721 21:52:26 -- event/event.sh@35 -- # sleep 3 00:06:15.980 [2024-07-26 21:52:27.000401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.980 [2024-07-26 21:52:27.032410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.980 [2024-07-26 21:52:27.032414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.980 [2024-07-26 21:52:27.073393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.980 [2024-07-26 21:52:27.073442] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:19.272 21:52:29 -- event/event.sh@23 -- # for i in {0..2} 00:06:19.272 21:52:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:19.272 spdk_app_start Round 2 00:06:19.272 21:52:29 -- event/event.sh@25 -- # waitforlisten 2008482 /var/tmp/spdk-nbd.sock 00:06:19.272 21:52:29 -- common/autotest_common.sh@819 -- # '[' -z 2008482 ']' 00:06:19.272 21:52:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.272 21:52:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.272 21:52:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.272 21:52:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.272 21:52:29 -- common/autotest_common.sh@10 -- # set +x 00:06:19.272 21:52:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.272 21:52:29 -- common/autotest_common.sh@852 -- # return 0 00:06:19.272 21:52:29 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.272 Malloc0 00:06:19.272 21:52:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.272 Malloc1 00:06:19.272 21:52:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@12 -- # local i 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.272 21:52:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.531 /dev/nbd0 00:06:19.531 21:52:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.531 21:52:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.532 21:52:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:19.532 21:52:30 -- common/autotest_common.sh@857 -- # local i 00:06:19.532 21:52:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.532 21:52:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.532 21:52:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:19.532 21:52:30 -- common/autotest_common.sh@861 -- # break 00:06:19.532 21:52:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.532 21:52:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.532 21:52:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.532 1+0 records in 00:06:19.532 1+0 records out 00:06:19.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228796 s, 17.9 MB/s 00:06:19.532 21:52:30 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.532 21:52:30 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.532 21:52:30 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.532 21:52:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.532 21:52:30 -- common/autotest_common.sh@877 -- # return 0 00:06:19.532 21:52:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.532 21:52:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.532 21:52:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.532 /dev/nbd1 00:06:19.532 21:52:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.532 21:52:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.532 21:52:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:19.532 21:52:30 -- common/autotest_common.sh@857 -- # local i 00:06:19.532 21:52:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.532 21:52:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.532 21:52:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:19.532 21:52:30 -- common/autotest_common.sh@861 -- # break 00:06:19.532 21:52:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.532 21:52:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.532 21:52:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.532 1+0 records in 00:06:19.532 1+0 records out 00:06:19.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221266 s, 18.5 MB/s 00:06:19.532 21:52:30 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.532 21:52:30 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.532 21:52:30 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:19.532 21:52:30 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.532 21:52:30 -- common/autotest_common.sh@877 -- # return 0 00:06:19.532 21:52:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.791 { 00:06:19.791 "nbd_device": "/dev/nbd0", 00:06:19.791 "bdev_name": "Malloc0" 00:06:19.791 }, 00:06:19.791 { 00:06:19.791 "nbd_device": "/dev/nbd1", 00:06:19.791 "bdev_name": "Malloc1" 00:06:19.791 } 00:06:19.791 ]' 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.791 { 00:06:19.791 "nbd_device": "/dev/nbd0", 00:06:19.791 "bdev_name": "Malloc0" 00:06:19.791 }, 00:06:19.791 { 00:06:19.791 "nbd_device": "/dev/nbd1", 00:06:19.791 "bdev_name": "Malloc1" 00:06:19.791 } 00:06:19.791 ]' 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.791 /dev/nbd1' 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.791 /dev/nbd1' 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.791 256+0 records in 00:06:19.791 256+0 records out 00:06:19.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107352 s, 97.7 MB/s 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.791 21:52:30 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.050 256+0 records in 00:06:20.050 256+0 records out 00:06:20.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202192 s, 51.9 MB/s 00:06:20.050 21:52:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.050 21:52:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.050 256+0 records in 00:06:20.050 256+0 records out 00:06:20.050 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198485 s, 52.8 MB/s 00:06:20.050 21:52:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.050 21:52:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:20.050 21:52:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.050 21:52:31 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@51 -- # local i 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@41 -- # break 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.051 21:52:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@41 -- # break 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.309 21:52:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@65 -- # true 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.568 21:52:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.569 21:52:31 -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.569 21:52:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.569 21:52:31 -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.569 21:52:31 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.828 21:52:31 -- event/event.sh@35 -- # sleep 3 00:06:20.828 [2024-07-26 21:52:32.036212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.088 [2024-07-26 21:52:32.068442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.088 [2024-07-26 21:52:32.068444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.088 [2024-07-26 21:52:32.109520] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.088 [2024-07-26 21:52:32.109564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.409 21:52:34 -- event/event.sh@38 -- # waitforlisten 2008482 /var/tmp/spdk-nbd.sock 00:06:24.409 21:52:34 -- common/autotest_common.sh@819 -- # '[' -z 2008482 ']' 00:06:24.409 21:52:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.409 21:52:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.409 21:52:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.409 21:52:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.409 21:52:34 -- common/autotest_common.sh@10 -- # set +x 00:06:24.409 21:52:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.409 21:52:35 -- common/autotest_common.sh@852 -- # return 0 00:06:24.409 21:52:35 -- event/event.sh@39 -- # killprocess 2008482 00:06:24.409 21:52:35 -- common/autotest_common.sh@926 -- # '[' -z 2008482 ']' 00:06:24.409 21:52:35 -- common/autotest_common.sh@930 -- # kill -0 2008482 00:06:24.409 21:52:35 -- common/autotest_common.sh@931 -- # uname 00:06:24.409 21:52:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.409 21:52:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2008482 00:06:24.409 21:52:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.409 21:52:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.409 21:52:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2008482' 00:06:24.409 killing process with pid 2008482 00:06:24.409 21:52:35 -- common/autotest_common.sh@945 -- # kill 2008482 00:06:24.409 21:52:35 -- common/autotest_common.sh@950 -- # wait 2008482 00:06:24.409 spdk_app_start is called in Round 0. 00:06:24.409 Shutdown signal received, stop current app iteration 00:06:24.409 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:24.409 spdk_app_start is called in Round 1. 
00:06:24.409 Shutdown signal received, stop current app iteration 00:06:24.409 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:24.409 spdk_app_start is called in Round 2. 00:06:24.409 Shutdown signal received, stop current app iteration 00:06:24.409 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:24.409 spdk_app_start is called in Round 3. 00:06:24.409 Shutdown signal received, stop current app iteration 00:06:24.409 21:52:35 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.409 21:52:35 -- event/event.sh@42 -- # return 0 00:06:24.409 00:06:24.409 real 0m16.163s 00:06:24.409 user 0m34.337s 00:06:24.409 sys 0m3.076s 00:06:24.409 21:52:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.409 21:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.409 ************************************ 00:06:24.409 END TEST app_repeat 00:06:24.409 ************************************ 00:06:24.409 21:52:35 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.409 21:52:35 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.409 21:52:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.409 21:52:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.409 21:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.409 ************************************ 00:06:24.409 START TEST cpu_locks 00:06:24.409 ************************************ 00:06:24.409 21:52:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:24.409 * Looking for test storage... 00:06:24.409 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:24.409 21:52:35 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.409 21:52:35 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.409 21:52:35 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.409 21:52:35 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.409 21:52:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.409 21:52:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.409 21:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.409 ************************************ 00:06:24.409 START TEST default_locks 00:06:24.409 ************************************ 00:06:24.409 21:52:35 -- common/autotest_common.sh@1104 -- # default_locks 00:06:24.409 21:52:35 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2011423 00:06:24.409 21:52:35 -- event/cpu_locks.sh@47 -- # waitforlisten 2011423 00:06:24.409 21:52:35 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.409 21:52:35 -- common/autotest_common.sh@819 -- # '[' -z 2011423 ']' 00:06:24.409 21:52:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.409 21:52:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.409 21:52:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
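Both the app_repeat shutdown above and the spdk_tgt shutdowns in the lock tests that follow go through the same killprocess helper. Its shape, sketched from the trace (the sudo special case visible in the checks is not handled here):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                        # still running?
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        [ "$process_name" = sudo ] && return 1            # sketch covers the non-sudo path only
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                       # valid because the test started the process
    }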
00:06:24.409 21:52:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.409 21:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:24.409 [2024-07-26 21:52:35.448229] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:24.409 [2024-07-26 21:52:35.448286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011423 ] 00:06:24.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.409 [2024-07-26 21:52:35.533533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.409 [2024-07-26 21:52:35.570860] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.409 [2024-07-26 21:52:35.570979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.346 21:52:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.346 21:52:36 -- common/autotest_common.sh@852 -- # return 0 00:06:25.346 21:52:36 -- event/cpu_locks.sh@49 -- # locks_exist 2011423 00:06:25.346 21:52:36 -- event/cpu_locks.sh@22 -- # lslocks -p 2011423 00:06:25.346 21:52:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.605 lslocks: write error 00:06:25.605 21:52:36 -- event/cpu_locks.sh@50 -- # killprocess 2011423 00:06:25.605 21:52:36 -- common/autotest_common.sh@926 -- # '[' -z 2011423 ']' 00:06:25.605 21:52:36 -- common/autotest_common.sh@930 -- # kill -0 2011423 00:06:25.605 21:52:36 -- common/autotest_common.sh@931 -- # uname 00:06:25.605 21:52:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.605 21:52:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2011423 00:06:25.605 21:52:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.605 21:52:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.605 21:52:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2011423' 00:06:25.605 killing process with pid 2011423 00:06:25.605 21:52:36 -- common/autotest_common.sh@945 -- # kill 2011423 00:06:25.605 21:52:36 -- common/autotest_common.sh@950 -- # wait 2011423 00:06:26.174 21:52:37 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2011423 00:06:26.174 21:52:37 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.174 21:52:37 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2011423 00:06:26.174 21:52:37 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:26.174 21:52:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.174 21:52:37 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:26.174 21:52:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.174 21:52:37 -- common/autotest_common.sh@643 -- # waitforlisten 2011423 00:06:26.174 21:52:37 -- common/autotest_common.sh@819 -- # '[' -z 2011423 ']' 00:06:26.174 21:52:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.174 21:52:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.174 21:52:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
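The lock check in default_locks is a one-liner over lslocks, and the stray "lslocks: write error" above is most likely just lslocks hitting a broken pipe once grep -q has matched and exited; it is not a test failure. A sketch:

    # spdk_tgt started with -m 0x1 takes a per-core lock file (matched here by the
    # spdk_cpu_lock name used in the grep above); locks_exist asserts it is visible to lslocks.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # usage, with the pid that waitforlisten was given:
    # locks_exist "$spdk_tgt_pid" && echo 'cpu core lock is held'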
00:06:26.174 21:52:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.174 21:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:26.174 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2011423) - No such process 00:06:26.174 ERROR: process (pid: 2011423) is no longer running 00:06:26.174 21:52:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.174 21:52:37 -- common/autotest_common.sh@852 -- # return 1 00:06:26.175 21:52:37 -- common/autotest_common.sh@643 -- # es=1 00:06:26.175 21:52:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.175 21:52:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:26.175 21:52:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.175 21:52:37 -- event/cpu_locks.sh@54 -- # no_locks 00:06:26.175 21:52:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.175 21:52:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.175 21:52:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.175 00:06:26.175 real 0m1.712s 00:06:26.175 user 0m1.762s 00:06:26.175 sys 0m0.613s 00:06:26.175 21:52:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.175 21:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:26.175 ************************************ 00:06:26.175 END TEST default_locks 00:06:26.175 ************************************ 00:06:26.175 21:52:37 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:26.175 21:52:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.175 21:52:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.175 21:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:26.175 ************************************ 00:06:26.175 START TEST default_locks_via_rpc 00:06:26.175 ************************************ 00:06:26.175 21:52:37 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:26.175 21:52:37 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2011810 00:06:26.175 21:52:37 -- event/cpu_locks.sh@63 -- # waitforlisten 2011810 00:06:26.175 21:52:37 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.175 21:52:37 -- common/autotest_common.sh@819 -- # '[' -z 2011810 ']' 00:06:26.175 21:52:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.175 21:52:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.175 21:52:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.175 21:52:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.175 21:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:26.175 [2024-07-26 21:52:37.211201] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:26.175 [2024-07-26 21:52:37.211257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2011810 ] 00:06:26.175 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.175 [2024-07-26 21:52:37.296047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.175 [2024-07-26 21:52:37.333731] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.175 [2024-07-26 21:52:37.333851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.113 21:52:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.113 21:52:37 -- common/autotest_common.sh@852 -- # return 0 00:06:27.113 21:52:37 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.113 21:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:27.113 21:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:27.113 21:52:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:27.113 21:52:38 -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.113 21:52:38 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.113 21:52:38 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.113 21:52:38 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.113 21:52:38 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.113 21:52:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:27.113 21:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:27.113 21:52:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:27.113 21:52:38 -- event/cpu_locks.sh@71 -- # locks_exist 2011810 00:06:27.113 21:52:38 -- event/cpu_locks.sh@22 -- # lslocks -p 2011810 00:06:27.113 21:52:38 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.373 21:52:38 -- event/cpu_locks.sh@73 -- # killprocess 2011810 00:06:27.373 21:52:38 -- common/autotest_common.sh@926 -- # '[' -z 2011810 ']' 00:06:27.373 21:52:38 -- common/autotest_common.sh@930 -- # kill -0 2011810 00:06:27.373 21:52:38 -- common/autotest_common.sh@931 -- # uname 00:06:27.373 21:52:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.373 21:52:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2011810 00:06:27.373 21:52:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.373 21:52:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.373 21:52:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2011810' 00:06:27.373 killing process with pid 2011810 00:06:27.373 21:52:38 -- common/autotest_common.sh@945 -- # kill 2011810 00:06:27.373 21:52:38 -- common/autotest_common.sh@950 -- # wait 2011810 00:06:27.632 00:06:27.632 real 0m1.589s 00:06:27.632 user 0m1.621s 00:06:27.632 sys 0m0.596s 00:06:27.632 21:52:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.632 21:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:27.632 ************************************ 00:06:27.632 END TEST default_locks_via_rpc 00:06:27.632 ************************************ 00:06:27.632 21:52:38 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:27.632 21:52:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.632 21:52:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.632 21:52:38 -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.632 ************************************ 00:06:27.632 START TEST non_locking_app_on_locked_coremask 00:06:27.632 ************************************ 00:06:27.632 21:52:38 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:27.632 21:52:38 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.632 21:52:38 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2012154 00:06:27.632 21:52:38 -- event/cpu_locks.sh@81 -- # waitforlisten 2012154 /var/tmp/spdk.sock 00:06:27.632 21:52:38 -- common/autotest_common.sh@819 -- # '[' -z 2012154 ']' 00:06:27.632 21:52:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.632 21:52:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.632 21:52:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.632 21:52:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.632 21:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:27.632 [2024-07-26 21:52:38.831258] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:27.632 [2024-07-26 21:52:38.831314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012154 ] 00:06:27.892 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.892 [2024-07-26 21:52:38.915548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.892 [2024-07-26 21:52:38.952206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.892 [2024-07-26 21:52:38.952327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.458 21:52:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.458 21:52:39 -- common/autotest_common.sh@852 -- # return 0 00:06:28.458 21:52:39 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2012292 00:06:28.458 21:52:39 -- event/cpu_locks.sh@85 -- # waitforlisten 2012292 /var/tmp/spdk2.sock 00:06:28.458 21:52:39 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:28.458 21:52:39 -- common/autotest_common.sh@819 -- # '[' -z 2012292 ']' 00:06:28.458 21:52:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.458 21:52:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.458 21:52:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.458 21:52:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.458 21:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:28.458 [2024-07-26 21:52:39.681391] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
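default_locks_via_rpc, which completed above, toggles the same lock over JSON-RPC instead of process lifetime: framework_disable_cpumask_locks releases the per-core lock, framework_enable_cpumask_locks re-acquires it, and lslocks confirms it is back. Roughly, with the stock rpc.py helper on the default /var/tmp/spdk.sock socket (the pgrep call is only an assumed convenience for finding the target pid):

  ./scripts/rpc.py framework_disable_cpumask_locks   # drop the core lock on a running target
  ./scripts/rpc.py framework_enable_cpumask_locks    # take it again
  lslocks -p "$(pgrep -f spdk_tgt)" | grep spdk_cpu_lock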
00:06:28.458 [2024-07-26 21:52:39.681443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012292 ] 00:06:28.716 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.716 [2024-07-26 21:52:39.799233] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.716 [2024-07-26 21:52:39.799263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.716 [2024-07-26 21:52:39.871361] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.716 [2024-07-26 21:52:39.871473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.283 21:52:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.283 21:52:40 -- common/autotest_common.sh@852 -- # return 0 00:06:29.283 21:52:40 -- event/cpu_locks.sh@87 -- # locks_exist 2012154 00:06:29.283 21:52:40 -- event/cpu_locks.sh@22 -- # lslocks -p 2012154 00:06:29.283 21:52:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.661 lslocks: write error 00:06:30.661 21:52:41 -- event/cpu_locks.sh@89 -- # killprocess 2012154 00:06:30.661 21:52:41 -- common/autotest_common.sh@926 -- # '[' -z 2012154 ']' 00:06:30.661 21:52:41 -- common/autotest_common.sh@930 -- # kill -0 2012154 00:06:30.661 21:52:41 -- common/autotest_common.sh@931 -- # uname 00:06:30.661 21:52:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:30.661 21:52:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2012154 00:06:30.661 21:52:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:30.661 21:52:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:30.661 21:52:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2012154' 00:06:30.661 killing process with pid 2012154 00:06:30.661 21:52:41 -- common/autotest_common.sh@945 -- # kill 2012154 00:06:30.661 21:52:41 -- common/autotest_common.sh@950 -- # wait 2012154 00:06:31.229 21:52:42 -- event/cpu_locks.sh@90 -- # killprocess 2012292 00:06:31.229 21:52:42 -- common/autotest_common.sh@926 -- # '[' -z 2012292 ']' 00:06:31.229 21:52:42 -- common/autotest_common.sh@930 -- # kill -0 2012292 00:06:31.229 21:52:42 -- common/autotest_common.sh@931 -- # uname 00:06:31.229 21:52:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.229 21:52:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2012292 00:06:31.487 21:52:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.487 21:52:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.487 21:52:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2012292' 00:06:31.487 killing process with pid 2012292 00:06:31.487 21:52:42 -- common/autotest_common.sh@945 -- # kill 2012292 00:06:31.487 21:52:42 -- common/autotest_common.sh@950 -- # wait 2012292 00:06:31.747 00:06:31.747 real 0m3.973s 00:06:31.747 user 0m4.214s 00:06:31.747 sys 0m1.370s 00:06:31.747 21:52:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.747 21:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:31.747 ************************************ 00:06:31.747 END TEST non_locking_app_on_locked_coremask 00:06:31.747 ************************************ 00:06:31.747 21:52:42 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:31.747 21:52:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:31.747 21:52:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.747 21:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:31.747 ************************************ 00:06:31.747 START TEST locking_app_on_unlocked_coremask 00:06:31.747 ************************************ 00:06:31.747 21:52:42 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:31.747 21:52:42 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2012875 00:06:31.747 21:52:42 -- event/cpu_locks.sh@99 -- # waitforlisten 2012875 /var/tmp/spdk.sock 00:06:31.747 21:52:42 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:31.747 21:52:42 -- common/autotest_common.sh@819 -- # '[' -z 2012875 ']' 00:06:31.747 21:52:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.747 21:52:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.747 21:52:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.747 21:52:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.747 21:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:31.747 [2024-07-26 21:52:42.867629] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:31.747 [2024-07-26 21:52:42.867685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012875 ] 00:06:31.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.747 [2024-07-26 21:52:42.953531] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:31.747 [2024-07-26 21:52:42.953555] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.006 [2024-07-26 21:52:42.991056] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.006 [2024-07-26 21:52:42.991170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.571 21:52:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.571 21:52:43 -- common/autotest_common.sh@852 -- # return 0 00:06:32.571 21:52:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2013077 00:06:32.571 21:52:43 -- event/cpu_locks.sh@103 -- # waitforlisten 2013077 /var/tmp/spdk2.sock 00:06:32.571 21:52:43 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.571 21:52:43 -- common/autotest_common.sh@819 -- # '[' -z 2013077 ']' 00:06:32.571 21:52:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.571 21:52:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.571 21:52:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
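Both this test and the previous one run two spdk_tgt instances on the same core mask. That only works because one of the two is started with --disable-cpumask-locks, and the second instance is given its own RPC socket with -r so the targets do not collide on /var/tmp/spdk.sock. A condensed sketch, with paths shortened:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # opts out of the core lock
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # same core, own RPC socket
  lslocks | grep spdk_cpu_lock                             # only the locking instance shows up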
00:06:32.571 21:52:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.571 21:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:32.571 [2024-07-26 21:52:43.695675] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:32.571 [2024-07-26 21:52:43.695743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013077 ] 00:06:32.571 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.830 [2024-07-26 21:52:43.817590] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.830 [2024-07-26 21:52:43.890354] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.830 [2024-07-26 21:52:43.890483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.399 21:52:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.399 21:52:44 -- common/autotest_common.sh@852 -- # return 0 00:06:33.399 21:52:44 -- event/cpu_locks.sh@105 -- # locks_exist 2013077 00:06:33.399 21:52:44 -- event/cpu_locks.sh@22 -- # lslocks -p 2013077 00:06:33.399 21:52:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:34.335 lslocks: write error 00:06:34.335 21:52:45 -- event/cpu_locks.sh@107 -- # killprocess 2012875 00:06:34.335 21:52:45 -- common/autotest_common.sh@926 -- # '[' -z 2012875 ']' 00:06:34.335 21:52:45 -- common/autotest_common.sh@930 -- # kill -0 2012875 00:06:34.335 21:52:45 -- common/autotest_common.sh@931 -- # uname 00:06:34.335 21:52:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.335 21:52:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2012875 00:06:34.335 21:52:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.335 21:52:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.336 21:52:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2012875' 00:06:34.336 killing process with pid 2012875 00:06:34.336 21:52:45 -- common/autotest_common.sh@945 -- # kill 2012875 00:06:34.336 21:52:45 -- common/autotest_common.sh@950 -- # wait 2012875 00:06:34.904 21:52:45 -- event/cpu_locks.sh@108 -- # killprocess 2013077 00:06:34.904 21:52:45 -- common/autotest_common.sh@926 -- # '[' -z 2013077 ']' 00:06:34.904 21:52:45 -- common/autotest_common.sh@930 -- # kill -0 2013077 00:06:34.904 21:52:45 -- common/autotest_common.sh@931 -- # uname 00:06:34.904 21:52:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.904 21:52:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2013077 00:06:34.904 21:52:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.904 21:52:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.904 21:52:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2013077' 00:06:34.905 killing process with pid 2013077 00:06:34.905 21:52:46 -- common/autotest_common.sh@945 -- # kill 2013077 00:06:34.905 21:52:46 -- common/autotest_common.sh@950 -- # wait 2013077 00:06:35.164 00:06:35.164 real 0m3.483s 00:06:35.164 user 0m3.705s 00:06:35.164 sys 0m1.178s 00:06:35.164 21:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.164 21:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.164 ************************************ 00:06:35.164 END TEST locking_app_on_unlocked_coremask 
00:06:35.164 ************************************ 00:06:35.164 21:52:46 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:35.164 21:52:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.164 21:52:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.164 21:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.164 ************************************ 00:06:35.164 START TEST locking_app_on_locked_coremask 00:06:35.164 ************************************ 00:06:35.164 21:52:46 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:35.164 21:52:46 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2013460 00:06:35.164 21:52:46 -- event/cpu_locks.sh@116 -- # waitforlisten 2013460 /var/tmp/spdk.sock 00:06:35.164 21:52:46 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.164 21:52:46 -- common/autotest_common.sh@819 -- # '[' -z 2013460 ']' 00:06:35.164 21:52:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.164 21:52:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.164 21:52:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.164 21:52:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.164 21:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:35.423 [2024-07-26 21:52:46.402482] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:35.423 [2024-07-26 21:52:46.402539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013460 ] 00:06:35.423 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.423 [2024-07-26 21:52:46.487634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.423 [2024-07-26 21:52:46.525122] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.423 [2024-07-26 21:52:46.525247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.991 21:52:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.991 21:52:47 -- common/autotest_common.sh@852 -- # return 0 00:06:35.991 21:52:47 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2013721 00:06:35.991 21:52:47 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2013721 /var/tmp/spdk2.sock 00:06:35.991 21:52:47 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:35.991 21:52:47 -- common/autotest_common.sh@640 -- # local es=0 00:06:35.991 21:52:47 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2013721 /var/tmp/spdk2.sock 00:06:35.991 21:52:47 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:35.991 21:52:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.991 21:52:47 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:35.991 21:52:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:35.991 21:52:47 -- common/autotest_common.sh@643 -- # waitforlisten 2013721 /var/tmp/spdk2.sock 00:06:35.991 21:52:47 -- common/autotest_common.sh@819 -- # '[' 
-z 2013721 ']' 00:06:35.991 21:52:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.991 21:52:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.991 21:52:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.991 21:52:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.991 21:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:36.250 [2024-07-26 21:52:47.232565] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:36.250 [2024-07-26 21:52:47.232620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013721 ] 00:06:36.250 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.250 [2024-07-26 21:52:47.350622] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2013460 has claimed it. 00:06:36.250 [2024-07-26 21:52:47.350679] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:36.819 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2013721) - No such process 00:06:36.819 ERROR: process (pid: 2013721) is no longer running 00:06:36.819 21:52:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.819 21:52:47 -- common/autotest_common.sh@852 -- # return 1 00:06:36.819 21:52:47 -- common/autotest_common.sh@643 -- # es=1 00:06:36.819 21:52:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:36.819 21:52:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:36.819 21:52:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:36.819 21:52:47 -- event/cpu_locks.sh@122 -- # locks_exist 2013460 00:06:36.819 21:52:47 -- event/cpu_locks.sh@22 -- # lslocks -p 2013460 00:06:36.819 21:52:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.079 lslocks: write error 00:06:37.079 21:52:48 -- event/cpu_locks.sh@124 -- # killprocess 2013460 00:06:37.079 21:52:48 -- common/autotest_common.sh@926 -- # '[' -z 2013460 ']' 00:06:37.079 21:52:48 -- common/autotest_common.sh@930 -- # kill -0 2013460 00:06:37.079 21:52:48 -- common/autotest_common.sh@931 -- # uname 00:06:37.079 21:52:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.079 21:52:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2013460 00:06:37.079 21:52:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.079 21:52:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.079 21:52:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2013460' 00:06:37.079 killing process with pid 2013460 00:06:37.079 21:52:48 -- common/autotest_common.sh@945 -- # kill 2013460 00:06:37.079 21:52:48 -- common/autotest_common.sh@950 -- # wait 2013460 00:06:37.338 00:06:37.338 real 0m2.153s 00:06:37.338 user 0m2.328s 00:06:37.338 sys 0m0.672s 00:06:37.338 21:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.338 21:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:37.338 ************************************ 00:06:37.338 END TEST locking_app_on_locked_coremask 00:06:37.338 ************************************ 00:06:37.338 21:52:48 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:37.338 21:52:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.338 21:52:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.338 21:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:37.338 ************************************ 00:06:37.338 START TEST locking_overlapped_coremask 00:06:37.338 ************************************ 00:06:37.338 21:52:48 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:37.338 21:52:48 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2014016 00:06:37.338 21:52:48 -- event/cpu_locks.sh@133 -- # waitforlisten 2014016 /var/tmp/spdk.sock 00:06:37.338 21:52:48 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:37.338 21:52:48 -- common/autotest_common.sh@819 -- # '[' -z 2014016 ']' 00:06:37.338 21:52:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.338 21:52:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.338 21:52:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.338 21:52:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.338 21:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:37.598 [2024-07-26 21:52:48.606885] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:37.598 [2024-07-26 21:52:48.606943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014016 ] 00:06:37.598 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.598 [2024-07-26 21:52:48.691972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.598 [2024-07-26 21:52:48.730768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.598 [2024-07-26 21:52:48.730905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.598 [2024-07-26 21:52:48.730998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.598 [2024-07-26 21:52:48.731001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.536 21:52:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.536 21:52:49 -- common/autotest_common.sh@852 -- # return 0 00:06:38.536 21:52:49 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2014036 00:06:38.536 21:52:49 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2014036 /var/tmp/spdk2.sock 00:06:38.536 21:52:49 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:38.536 21:52:49 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.536 21:52:49 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2014036 /var/tmp/spdk2.sock 00:06:38.536 21:52:49 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:38.536 21:52:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.536 21:52:49 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:38.536 21:52:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.536 21:52:49 -- 
common/autotest_common.sh@643 -- # waitforlisten 2014036 /var/tmp/spdk2.sock 00:06:38.536 21:52:49 -- common/autotest_common.sh@819 -- # '[' -z 2014036 ']' 00:06:38.536 21:52:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.536 21:52:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.536 21:52:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.536 21:52:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.536 21:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:38.536 [2024-07-26 21:52:49.456940] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:38.536 [2024-07-26 21:52:49.456996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014036 ] 00:06:38.536 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.536 [2024-07-26 21:52:49.582323] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2014016 has claimed it. 00:06:38.536 [2024-07-26 21:52:49.582364] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.105 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2014036) - No such process 00:06:39.105 ERROR: process (pid: 2014036) is no longer running 00:06:39.105 21:52:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.105 21:52:50 -- common/autotest_common.sh@852 -- # return 1 00:06:39.105 21:52:50 -- common/autotest_common.sh@643 -- # es=1 00:06:39.106 21:52:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:39.106 21:52:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:39.106 21:52:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:39.106 21:52:50 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:39.106 21:52:50 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:39.106 21:52:50 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:39.106 21:52:50 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:39.106 21:52:50 -- event/cpu_locks.sh@141 -- # killprocess 2014016 00:06:39.106 21:52:50 -- common/autotest_common.sh@926 -- # '[' -z 2014016 ']' 00:06:39.106 21:52:50 -- common/autotest_common.sh@930 -- # kill -0 2014016 00:06:39.106 21:52:50 -- common/autotest_common.sh@931 -- # uname 00:06:39.106 21:52:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.106 21:52:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2014016 00:06:39.106 21:52:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.106 21:52:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.106 21:52:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2014016' 00:06:39.106 killing process with pid 2014016 00:06:39.106 21:52:50 -- common/autotest_common.sh@945 -- # kill 2014016 00:06:39.106 21:52:50 -- 
common/autotest_common.sh@950 -- # wait 2014016 00:06:39.365 00:06:39.365 real 0m1.880s 00:06:39.365 user 0m5.283s 00:06:39.365 sys 0m0.499s 00:06:39.365 21:52:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.365 21:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.365 ************************************ 00:06:39.366 END TEST locking_overlapped_coremask 00:06:39.366 ************************************ 00:06:39.366 21:52:50 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:39.366 21:52:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.366 21:52:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.366 21:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.366 ************************************ 00:06:39.366 START TEST locking_overlapped_coremask_via_rpc 00:06:39.366 ************************************ 00:06:39.366 21:52:50 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:39.366 21:52:50 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2014325 00:06:39.366 21:52:50 -- event/cpu_locks.sh@149 -- # waitforlisten 2014325 /var/tmp/spdk.sock 00:06:39.366 21:52:50 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:39.366 21:52:50 -- common/autotest_common.sh@819 -- # '[' -z 2014325 ']' 00:06:39.366 21:52:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.366 21:52:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.366 21:52:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.366 21:52:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.366 21:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:39.366 [2024-07-26 21:52:50.529828] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:39.366 [2024-07-26 21:52:50.529882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014325 ] 00:06:39.366 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.655 [2024-07-26 21:52:50.615426] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
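The overlapped-coremask case that ended above is about the lock files themselves: the first target, started with -m 0x7, claims /var/tmp/spdk_cpu_lock_000 through _002, so a second target started with -m 0x1c overlaps on core 2 and aborts with "Cannot create lock on core 2"; afterwards the test just compares the surviving lock files against the expected set. A rough sketch:

  ./build/bin/spdk_tgt -m 0x7 &                  # claims cores 0-2
  ls /var/tmp/spdk_cpu_lock_*                    # expect _000 _001 _002
  # overlapping mask (cores 2-4) must refuse to start while the first target lives
  ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock \
    || echo "second target refused core 2, as expected"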
00:06:39.655 [2024-07-26 21:52:50.615454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.655 [2024-07-26 21:52:50.654925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.655 [2024-07-26 21:52:50.655063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.655 [2024-07-26 21:52:50.655973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.655 [2024-07-26 21:52:50.655976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.223 21:52:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.223 21:52:51 -- common/autotest_common.sh@852 -- # return 0 00:06:40.223 21:52:51 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:40.223 21:52:51 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2014492 00:06:40.223 21:52:51 -- event/cpu_locks.sh@153 -- # waitforlisten 2014492 /var/tmp/spdk2.sock 00:06:40.223 21:52:51 -- common/autotest_common.sh@819 -- # '[' -z 2014492 ']' 00:06:40.223 21:52:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.223 21:52:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.223 21:52:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.223 21:52:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.223 21:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:40.223 [2024-07-26 21:52:51.375085] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:40.223 [2024-07-26 21:52:51.375143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014492 ] 00:06:40.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.483 [2024-07-26 21:52:51.496431] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
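In the _via_rpc variant the same overlap is tolerated at startup because both targets (masks 0x7 and 0x1c) are launched with --disable-cpumask-locks, so neither claims a core lock until it is asked to over RPC later. The second instance again talks on its own socket; roughly:

  ./build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
  ./build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  # at this point neither pid should hold any /var/tmp/spdk_cpu_lock_* lock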
00:06:40.483 [2024-07-26 21:52:51.496459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.483 [2024-07-26 21:52:51.575219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.483 [2024-07-26 21:52:51.575425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.483 [2024-07-26 21:52:51.578673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.483 [2024-07-26 21:52:51.578674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:41.051 21:52:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.051 21:52:52 -- common/autotest_common.sh@852 -- # return 0 00:06:41.051 21:52:52 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.051 21:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.051 21:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.051 21:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.051 21:52:52 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.051 21:52:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:41.051 21:52:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.051 21:52:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:41.051 21:52:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.051 21:52:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:41.051 21:52:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.051 21:52:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:41.051 21:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.051 21:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.051 [2024-07-26 21:52:52.196696] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2014325 has claimed it. 00:06:41.051 request: 00:06:41.051 { 00:06:41.051 "method": "framework_enable_cpumask_locks", 00:06:41.051 "req_id": 1 00:06:41.051 } 00:06:41.051 Got JSON-RPC error response 00:06:41.051 response: 00:06:41.051 { 00:06:41.051 "code": -32603, 00:06:41.051 "message": "Failed to claim CPU core: 2" 00:06:41.051 } 00:06:41.051 21:52:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:41.051 21:52:52 -- common/autotest_common.sh@643 -- # es=1 00:06:41.051 21:52:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.051 21:52:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.051 21:52:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.051 21:52:52 -- event/cpu_locks.sh@158 -- # waitforlisten 2014325 /var/tmp/spdk.sock 00:06:41.051 21:52:52 -- common/autotest_common.sh@819 -- # '[' -z 2014325 ']' 00:06:41.051 21:52:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.051 21:52:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.051 21:52:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
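The failed RPC above is the point of the test: once the first target has taken its locks via framework_enable_cpumask_locks, issuing the same RPC against the second target's socket must fail with the JSON-RPC -32603 error ("Failed to claim CPU core: 2"), since core 2 is already locked. With rpc.py the negative check looks roughly like:

  ./scripts/rpc.py framework_enable_cpumask_locks              # first target: succeeds
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
    || echo "expected failure: core 2 already claimed (-32603)"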
00:06:41.051 21:52:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.051 21:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.310 21:52:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.310 21:52:52 -- common/autotest_common.sh@852 -- # return 0 00:06:41.310 21:52:52 -- event/cpu_locks.sh@159 -- # waitforlisten 2014492 /var/tmp/spdk2.sock 00:06:41.310 21:52:52 -- common/autotest_common.sh@819 -- # '[' -z 2014492 ']' 00:06:41.310 21:52:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.310 21:52:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.310 21:52:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.310 21:52:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.310 21:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.570 21:52:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.570 21:52:52 -- common/autotest_common.sh@852 -- # return 0 00:06:41.570 21:52:52 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:41.570 21:52:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:41.570 21:52:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:41.570 21:52:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:41.570 00:06:41.570 real 0m2.073s 00:06:41.570 user 0m0.798s 00:06:41.570 sys 0m0.209s 00:06:41.570 21:52:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.570 21:52:52 -- common/autotest_common.sh@10 -- # set +x 00:06:41.570 ************************************ 00:06:41.570 END TEST locking_overlapped_coremask_via_rpc 00:06:41.570 ************************************ 00:06:41.570 21:52:52 -- event/cpu_locks.sh@174 -- # cleanup 00:06:41.570 21:52:52 -- event/cpu_locks.sh@15 -- # [[ -z 2014325 ]] 00:06:41.570 21:52:52 -- event/cpu_locks.sh@15 -- # killprocess 2014325 00:06:41.570 21:52:52 -- common/autotest_common.sh@926 -- # '[' -z 2014325 ']' 00:06:41.570 21:52:52 -- common/autotest_common.sh@930 -- # kill -0 2014325 00:06:41.570 21:52:52 -- common/autotest_common.sh@931 -- # uname 00:06:41.570 21:52:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.570 21:52:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2014325 00:06:41.570 21:52:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.570 21:52:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.570 21:52:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2014325' 00:06:41.570 killing process with pid 2014325 00:06:41.570 21:52:52 -- common/autotest_common.sh@945 -- # kill 2014325 00:06:41.570 21:52:52 -- common/autotest_common.sh@950 -- # wait 2014325 00:06:41.829 21:52:52 -- event/cpu_locks.sh@16 -- # [[ -z 2014492 ]] 00:06:41.829 21:52:52 -- event/cpu_locks.sh@16 -- # killprocess 2014492 00:06:41.829 21:52:52 -- common/autotest_common.sh@926 -- # '[' -z 2014492 ']' 00:06:41.829 21:52:52 -- common/autotest_common.sh@930 -- # kill -0 2014492 00:06:41.829 21:52:52 -- common/autotest_common.sh@931 -- # uname 
00:06:41.829 21:52:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.829 21:52:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2014492 00:06:41.829 21:52:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:41.829 21:52:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:41.829 21:52:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2014492' 00:06:41.829 killing process with pid 2014492 00:06:41.829 21:52:53 -- common/autotest_common.sh@945 -- # kill 2014492 00:06:41.829 21:52:53 -- common/autotest_common.sh@950 -- # wait 2014492 00:06:42.398 21:52:53 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.398 21:52:53 -- event/cpu_locks.sh@1 -- # cleanup 00:06:42.398 21:52:53 -- event/cpu_locks.sh@15 -- # [[ -z 2014325 ]] 00:06:42.398 21:52:53 -- event/cpu_locks.sh@15 -- # killprocess 2014325 00:06:42.398 21:52:53 -- common/autotest_common.sh@926 -- # '[' -z 2014325 ']' 00:06:42.398 21:52:53 -- common/autotest_common.sh@930 -- # kill -0 2014325 00:06:42.398 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2014325) - No such process 00:06:42.398 21:52:53 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2014325 is not found' 00:06:42.398 Process with pid 2014325 is not found 00:06:42.398 21:52:53 -- event/cpu_locks.sh@16 -- # [[ -z 2014492 ]] 00:06:42.398 21:52:53 -- event/cpu_locks.sh@16 -- # killprocess 2014492 00:06:42.398 21:52:53 -- common/autotest_common.sh@926 -- # '[' -z 2014492 ']' 00:06:42.398 21:52:53 -- common/autotest_common.sh@930 -- # kill -0 2014492 00:06:42.398 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2014492) - No such process 00:06:42.398 21:52:53 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2014492 is not found' 00:06:42.398 Process with pid 2014492 is not found 00:06:42.398 21:52:53 -- event/cpu_locks.sh@18 -- # rm -f 00:06:42.398 00:06:42.398 real 0m18.047s 00:06:42.398 user 0m30.141s 00:06:42.398 sys 0m6.082s 00:06:42.398 21:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.398 21:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:42.398 ************************************ 00:06:42.398 END TEST cpu_locks 00:06:42.398 ************************************ 00:06:42.398 00:06:42.398 real 0m42.545s 00:06:42.398 user 1m19.089s 00:06:42.398 sys 0m10.204s 00:06:42.398 21:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.398 21:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:42.398 ************************************ 00:06:42.398 END TEST event 00:06:42.398 ************************************ 00:06:42.398 21:52:53 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:42.398 21:52:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.398 21:52:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.398 21:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:42.398 ************************************ 00:06:42.398 START TEST thread 00:06:42.398 ************************************ 00:06:42.398 21:52:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:42.398 * Looking for test storage... 
00:06:42.398 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:42.398 21:52:53 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.398 21:52:53 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:42.398 21:52:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.398 21:52:53 -- common/autotest_common.sh@10 -- # set +x 00:06:42.398 ************************************ 00:06:42.398 START TEST thread_poller_perf 00:06:42.398 ************************************ 00:06:42.398 21:52:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:42.398 [2024-07-26 21:52:53.533736] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:42.398 [2024-07-26 21:52:53.533832] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014968 ] 00:06:42.398 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.398 [2024-07-26 21:52:53.619505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.657 [2024-07-26 21:52:53.655985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.657 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:43.595 ====================================== 00:06:43.595 busy:2506006316 (cyc) 00:06:43.595 total_run_count: 418000 00:06:43.595 tsc_hz: 2500000000 (cyc) 00:06:43.595 ====================================== 00:06:43.595 poller_cost: 5995 (cyc), 2398 (nsec) 00:06:43.595 00:06:43.595 real 0m1.204s 00:06:43.595 user 0m1.096s 00:06:43.595 sys 0m0.103s 00:06:43.595 21:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.595 21:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.595 ************************************ 00:06:43.595 END TEST thread_poller_perf 00:06:43.595 ************************************ 00:06:43.595 21:52:54 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.595 21:52:54 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:43.595 21:52:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.595 21:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:43.595 ************************************ 00:06:43.595 START TEST thread_poller_perf 00:06:43.595 ************************************ 00:06:43.595 21:52:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:43.595 [2024-07-26 21:52:54.788695] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
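The poller_cost reported above is simply busy cycles divided by completed poller iterations, converted to nanoseconds with the reported 2.5 GHz TSC: 2506006316 / 418000 gives about 5995 cycles, or roughly 2398 ns per 1 µs-period poller invocation. The second run, started just below with -l 0, repeats the measurement with a zero-microsecond period. The figure can be reproduced from the log with a one-liner using the constants from this run:

  awk 'BEGIN { busy = 2506006316; n = 418000; tsc = 2500000000
               printf "poller_cost: %d cyc, %d nsec\n", busy/n, busy/n/tsc*1e9 }'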
00:06:43.595 [2024-07-26 21:52:54.788804] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015252 ] 00:06:43.854 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.854 [2024-07-26 21:52:54.874381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.854 [2024-07-26 21:52:54.907636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.854 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:44.789 ====================================== 00:06:44.789 busy:2502594334 (cyc) 00:06:44.789 total_run_count: 5710000 00:06:44.789 tsc_hz: 2500000000 (cyc) 00:06:44.789 ====================================== 00:06:44.789 poller_cost: 438 (cyc), 175 (nsec) 00:06:44.789 00:06:44.789 real 0m1.198s 00:06:44.789 user 0m1.102s 00:06:44.789 sys 0m0.093s 00:06:44.789 21:52:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.789 21:52:55 -- common/autotest_common.sh@10 -- # set +x 00:06:44.789 ************************************ 00:06:44.789 END TEST thread_poller_perf 00:06:44.789 ************************************ 00:06:44.789 21:52:55 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:44.789 00:06:44.789 real 0m2.580s 00:06:44.789 user 0m2.266s 00:06:44.789 sys 0m0.330s 00:06:44.789 21:52:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.789 21:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:44.789 ************************************ 00:06:44.789 END TEST thread 00:06:44.790 ************************************ 00:06:45.047 21:52:56 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:45.047 21:52:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.047 21:52:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.047 21:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:45.047 ************************************ 00:06:45.047 START TEST accel 00:06:45.047 ************************************ 00:06:45.047 21:52:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:45.047 * Looking for test storage... 00:06:45.047 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:45.047 21:52:56 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:45.047 21:52:56 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:45.047 21:52:56 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.047 21:52:56 -- accel/accel.sh@59 -- # spdk_tgt_pid=2015540 00:06:45.048 21:52:56 -- accel/accel.sh@60 -- # waitforlisten 2015540 00:06:45.048 21:52:56 -- common/autotest_common.sh@819 -- # '[' -z 2015540 ']' 00:06:45.048 21:52:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.048 21:52:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.048 21:52:56 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:45.048 21:52:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
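The accel suite starting here first reads the opcode-to-module table: the long IFS== loop in the following lines is the accel.sh helper walking the output of accel_get_opc_assignments, which returns a JSON object mapping each opcode to the module that will service it, and in this run every entry comes back as "software" (no hardware accel module engaged). The underlying query, using the jq filter visible in the trace:

  ./scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'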
00:06:45.048 21:52:56 -- accel/accel.sh@58 -- # build_accel_config 00:06:45.048 21:52:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.048 21:52:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.048 21:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:45.048 21:52:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.048 21:52:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.048 21:52:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.048 21:52:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.048 21:52:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.048 21:52:56 -- accel/accel.sh@42 -- # jq -r . 00:06:45.048 [2024-07-26 21:52:56.204726] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:45.048 [2024-07-26 21:52:56.204785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015540 ] 00:06:45.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.305 [2024-07-26 21:52:56.289312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.305 [2024-07-26 21:52:56.325796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.305 [2024-07-26 21:52:56.325916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.872 21:52:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.872 21:52:56 -- common/autotest_common.sh@852 -- # return 0 00:06:45.872 21:52:56 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:45.872 21:52:56 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:45.872 21:52:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:45.872 21:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:45.872 21:52:56 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:45.872 21:52:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.872 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.872 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.872 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.873 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.873 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.873 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.873 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.873 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.873 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.873 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.873 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.873 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:45.873 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.873 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.873 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.873 21:52:57 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:45.873 21:52:57 -- accel/accel.sh@64 -- # IFS== 00:06:45.873 21:52:57 -- accel/accel.sh@64 -- # read -r opc module 00:06:45.873 21:52:57 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:45.873 21:52:57 -- accel/accel.sh@67 -- # killprocess 2015540 00:06:45.873 21:52:57 -- common/autotest_common.sh@926 -- # '[' -z 2015540 ']' 00:06:45.873 21:52:57 -- common/autotest_common.sh@930 -- # kill -0 2015540 00:06:45.873 21:52:57 -- common/autotest_common.sh@931 -- # uname 00:06:45.873 21:52:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.873 21:52:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2015540 00:06:45.873 21:52:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.873 21:52:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:45.873 21:52:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2015540' 00:06:45.873 killing process with pid 2015540 00:06:45.873 21:52:57 -- common/autotest_common.sh@945 -- # kill 2015540 00:06:45.873 21:52:57 -- common/autotest_common.sh@950 -- # wait 2015540 00:06:46.440 21:52:57 -- accel/accel.sh@68 -- # trap - ERR 00:06:46.440 21:52:57 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:46.440 21:52:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:46.440 21:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.440 21:52:57 -- common/autotest_common.sh@10 -- # set +x 00:06:46.440 21:52:57 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:46.440 21:52:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:46.440 21:52:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.440 21:52:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.440 21:52:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.440 21:52:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.440 21:52:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.440 21:52:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.440 21:52:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.440 21:52:57 -- accel/accel.sh@42 -- # jq -r . 
00:06:46.440 21:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.440 21:52:57 -- common/autotest_common.sh@10 -- # set +x 00:06:46.440 21:52:57 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:46.440 21:52:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.440 21:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.440 21:52:57 -- common/autotest_common.sh@10 -- # set +x 00:06:46.440 ************************************ 00:06:46.440 START TEST accel_missing_filename 00:06:46.440 ************************************ 00:06:46.440 21:52:57 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:46.440 21:52:57 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.440 21:52:57 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:46.440 21:52:57 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.440 21:52:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.440 21:52:57 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.440 21:52:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.440 21:52:57 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:46.440 21:52:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.440 21:52:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:46.440 21:52:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.440 21:52:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.440 21:52:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.440 21:52:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.440 21:52:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.440 21:52:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.440 21:52:57 -- accel/accel.sh@42 -- # jq -r . 00:06:46.440 [2024-07-26 21:52:57.494936] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:46.440 [2024-07-26 21:52:57.495006] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015736 ] 00:06:46.440 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.440 [2024-07-26 21:52:57.579327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.440 [2024-07-26 21:52:57.615533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.440 [2024-07-26 21:52:57.656588] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.699 [2024-07-26 21:52:57.716749] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:46.699 A filename is required. 
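The accel_missing_filename test above runs NOT accel_perf -t 1 -w compress through run_test and expects the binary to abort, because a compress workload needs an input file supplied with -l; the "A filename is required." error recorded above is that abort. A minimal sketch of reproducing the same check by hand is shown below. The binary path is the one printed in this log; the harness additionally passes -c /dev/fd/62 with a generated JSON config, which is omitted here.

    # Sketch only: re-run the missing-filename check outside the test harness.
    ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
    # Expected to fail: compress/decompress workloads require -l <uncompressed input file>.
    if "$ACCEL_PERF" -t 1 -w compress; then
        echo "unexpected: accel_perf accepted -w compress without -l"
    else
        echo "accel_perf refused to start, as the test expects"
    fi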
00:06:46.699 21:52:57 -- common/autotest_common.sh@643 -- # es=234 00:06:46.699 21:52:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.699 21:52:57 -- common/autotest_common.sh@652 -- # es=106 00:06:46.699 21:52:57 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:46.699 21:52:57 -- common/autotest_common.sh@660 -- # es=1 00:06:46.699 21:52:57 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.699 00:06:46.699 real 0m0.312s 00:06:46.699 user 0m0.186s 00:06:46.699 sys 0m0.149s 00:06:46.699 21:52:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.699 21:52:57 -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 ************************************ 00:06:46.699 END TEST accel_missing_filename 00:06:46.699 ************************************ 00:06:46.699 21:52:57 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:46.699 21:52:57 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:46.699 21:52:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.699 21:52:57 -- common/autotest_common.sh@10 -- # set +x 00:06:46.699 ************************************ 00:06:46.699 START TEST accel_compress_verify 00:06:46.699 ************************************ 00:06:46.699 21:52:57 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:46.699 21:52:57 -- common/autotest_common.sh@640 -- # local es=0 00:06:46.699 21:52:57 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:46.699 21:52:57 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:46.699 21:52:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.699 21:52:57 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:46.699 21:52:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:46.699 21:52:57 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:46.699 21:52:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:46.699 21:52:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.699 21:52:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.699 21:52:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.699 21:52:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.699 21:52:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.699 21:52:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.699 21:52:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.699 21:52:57 -- accel/accel.sh@42 -- # jq -r . 00:06:46.699 [2024-07-26 21:52:57.858563] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:46.699 [2024-07-26 21:52:57.858639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015902 ] 00:06:46.699 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.958 [2024-07-26 21:52:57.943387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.958 [2024-07-26 21:52:57.978549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.958 [2024-07-26 21:52:58.019100] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.958 [2024-07-26 21:52:58.079003] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:46.958 00:06:46.958 Compression does not support the verify option, aborting. 00:06:46.958 21:52:58 -- common/autotest_common.sh@643 -- # es=161 00:06:46.958 21:52:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:46.958 21:52:58 -- common/autotest_common.sh@652 -- # es=33 00:06:46.958 21:52:58 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:46.958 21:52:58 -- common/autotest_common.sh@660 -- # es=1 00:06:46.958 21:52:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:46.958 00:06:46.958 real 0m0.313s 00:06:46.958 user 0m0.213s 00:06:46.958 sys 0m0.138s 00:06:46.958 21:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.958 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:46.958 ************************************ 00:06:46.958 END TEST accel_compress_verify 00:06:46.958 ************************************ 00:06:46.958 21:52:58 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:46.958 21:52:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:46.958 21:52:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.958 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 ************************************ 00:06:47.217 START TEST accel_wrong_workload 00:06:47.217 ************************************ 00:06:47.217 21:52:58 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:47.217 21:52:58 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.217 21:52:58 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:47.217 21:52:58 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:47.217 21:52:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.217 21:52:58 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:47.217 21:52:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.217 21:52:58 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:47.217 21:52:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.217 21:52:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:47.217 21:52:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.217 21:52:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.217 21:52:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.217 21:52:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.217 21:52:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.217 21:52:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.217 21:52:58 -- accel/accel.sh@42 -- # jq -r . 
00:06:47.217 Unsupported workload type: foobar 00:06:47.217 [2024-07-26 21:52:58.219323] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:47.217 accel_perf options: 00:06:47.217 [-h help message] 00:06:47.217 [-q queue depth per core] 00:06:47.217 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.217 [-T number of threads per core 00:06:47.217 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.217 [-t time in seconds] 00:06:47.217 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.217 [ dif_verify, , dif_generate, dif_generate_copy 00:06:47.217 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.217 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.217 [-S for crc32c workload, use this seed value (default 0) 00:06:47.217 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.217 [-f for fill workload, use this BYTE value (default 255) 00:06:47.217 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.217 [-y verify result if this switch is on] 00:06:47.217 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.217 Can be used to spread operations across a wider range of memory. 00:06:47.217 Error: writing output failed: Broken pipe 00:06:47.217 21:52:58 -- common/autotest_common.sh@643 -- # es=1 00:06:47.217 21:52:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.217 21:52:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:47.217 21:52:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.217 00:06:47.217 real 0m0.037s 00:06:47.217 user 0m0.049s 00:06:47.217 sys 0m0.019s 00:06:47.217 21:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.217 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 ************************************ 00:06:47.217 END TEST accel_wrong_workload 00:06:47.217 ************************************ 00:06:47.217 21:52:58 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:47.217 21:52:58 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:47.217 21:52:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.217 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 ************************************ 00:06:47.217 START TEST accel_negative_buffers 00:06:47.217 ************************************ 00:06:47.217 21:52:58 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:47.217 21:52:58 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.217 21:52:58 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:47.218 21:52:58 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:47.218 21:52:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.218 21:52:58 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:47.218 21:52:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.218 21:52:58 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:47.218 21:52:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:06:47.218 21:52:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.218 21:52:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.218 21:52:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.218 21:52:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.218 21:52:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.218 21:52:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.218 21:52:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.218 21:52:58 -- accel/accel.sh@42 -- # jq -r . 00:06:47.218 -x option must be non-negative. 00:06:47.218 [2024-07-26 21:52:58.301144] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:47.218 accel_perf options: 00:06:47.218 [-h help message] 00:06:47.218 [-q queue depth per core] 00:06:47.218 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:47.218 [-T number of threads per core 00:06:47.218 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:47.218 [-t time in seconds] 00:06:47.218 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:47.218 [ dif_verify, , dif_generate, dif_generate_copy 00:06:47.218 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:47.218 [-l for compress/decompress workloads, name of uncompressed input file 00:06:47.218 [-S for crc32c workload, use this seed value (default 0) 00:06:47.218 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:47.218 [-f for fill workload, use this BYTE value (default 255) 00:06:47.218 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:47.218 [-y verify result if this switch is on] 00:06:47.218 [-a tasks to allocate per core (default: same value as -q)] 00:06:47.218 Can be used to spread operations across a wider range of memory. 
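The option summary printed above (once for the foobar workload, once for the negative -x value) lists the accel_perf flags this suite exercises. A short sketch contrasting the rejected invocation from accel_negative_buffers with an accepted one follows; the minimum of 2 source buffers for xor and the non-negative requirement on -x are both taken from that help text, and the binary path is the one from this log.

    ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
    # Rejected: per the help text and the error above, -x must be non-negative (parse fails, exit 1).
    "$ACCEL_PERF" -t 1 -w xor -y -x -1 || echo "negative source-buffer count rejected"
    # Accepted: xor needs at least 2 source buffers according to the same help text.
    "$ACCEL_PERF" -t 1 -w xor -y -x 2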
00:06:47.218 21:52:58 -- common/autotest_common.sh@643 -- # es=1 00:06:47.218 21:52:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.218 21:52:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:47.218 21:52:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.218 00:06:47.218 real 0m0.035s 00:06:47.218 user 0m0.019s 00:06:47.218 sys 0m0.015s 00:06:47.218 21:52:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.218 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 ************************************ 00:06:47.218 END TEST accel_negative_buffers 00:06:47.218 ************************************ 00:06:47.218 Error: writing output failed: Broken pipe 00:06:47.218 21:52:58 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:47.218 21:52:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:47.218 21:52:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.218 21:52:58 -- common/autotest_common.sh@10 -- # set +x 00:06:47.218 ************************************ 00:06:47.218 START TEST accel_crc32c 00:06:47.218 ************************************ 00:06:47.218 21:52:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:47.218 21:52:58 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.218 21:52:58 -- accel/accel.sh@17 -- # local accel_module 00:06:47.218 21:52:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.218 21:52:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:47.218 21:52:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.218 21:52:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.218 21:52:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.218 21:52:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.218 21:52:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.218 21:52:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.218 21:52:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.218 21:52:58 -- accel/accel.sh@42 -- # jq -r . 00:06:47.218 [2024-07-26 21:52:58.381308] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:47.218 [2024-07-26 21:52:58.381383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2015965 ] 00:06:47.218 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.477 [2024-07-26 21:52:58.467409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.477 [2024-07-26 21:52:58.503279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.853 21:52:59 -- accel/accel.sh@18 -- # out=' 00:06:48.853 SPDK Configuration: 00:06:48.853 Core mask: 0x1 00:06:48.853 00:06:48.853 Accel Perf Configuration: 00:06:48.853 Workload Type: crc32c 00:06:48.853 CRC-32C seed: 32 00:06:48.853 Transfer size: 4096 bytes 00:06:48.853 Vector count 1 00:06:48.853 Module: software 00:06:48.853 Queue depth: 32 00:06:48.853 Allocate depth: 32 00:06:48.853 # threads/core: 1 00:06:48.853 Run time: 1 seconds 00:06:48.853 Verify: Yes 00:06:48.853 00:06:48.853 Running for 1 seconds... 
00:06:48.853 00:06:48.853 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.853 ------------------------------------------------------------------------------------ 00:06:48.853 0,0 596960/s 2331 MiB/s 0 0 00:06:48.853 ==================================================================================== 00:06:48.853 Total 596960/s 2331 MiB/s 0 0' 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:48.853 21:52:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:48.853 21:52:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.853 21:52:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.853 21:52:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.853 21:52:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.853 21:52:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.853 21:52:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.853 21:52:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.853 21:52:59 -- accel/accel.sh@42 -- # jq -r . 00:06:48.853 [2024-07-26 21:52:59.695782] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:48.853 [2024-07-26 21:52:59.695867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016233 ] 00:06:48.853 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.853 [2024-07-26 21:52:59.782597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.853 [2024-07-26 21:52:59.818239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val= 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val= 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val=0x1 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val= 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val= 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val=crc32c 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val=32 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 
-- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val= 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.853 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.853 21:52:59 -- accel/accel.sh@21 -- # val=software 00:06:48.853 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.853 21:52:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.854 21:52:59 -- accel/accel.sh@21 -- # val=32 00:06:48.854 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.854 21:52:59 -- accel/accel.sh@21 -- # val=32 00:06:48.854 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.854 21:52:59 -- accel/accel.sh@21 -- # val=1 00:06:48.854 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.854 21:52:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.854 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.854 21:52:59 -- accel/accel.sh@21 -- # val=Yes 00:06:48.854 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.854 21:52:59 -- accel/accel.sh@21 -- # val= 00:06:48.854 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:48.854 21:52:59 -- accel/accel.sh@21 -- # val= 00:06:48.854 21:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:48.854 21:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:49.791 21:53:00 -- accel/accel.sh@21 -- # val= 00:06:49.791 21:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.791 21:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.791 21:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.791 21:53:00 -- accel/accel.sh@21 -- # val= 00:06:49.791 21:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.791 21:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.791 21:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.791 21:53:00 -- accel/accel.sh@21 -- # val= 00:06:49.791 21:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.791 21:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.791 21:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.791 21:53:00 -- accel/accel.sh@21 -- # val= 00:06:49.792 21:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.792 21:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.792 21:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.792 21:53:00 -- accel/accel.sh@21 -- # val= 00:06:49.792 21:53:00 -- accel/accel.sh@22 -- # case "$var" in 
00:06:49.792 21:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.792 21:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.792 21:53:00 -- accel/accel.sh@21 -- # val= 00:06:49.792 21:53:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.792 21:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:49.792 21:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:49.792 21:53:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.792 21:53:00 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:49.792 21:53:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.792 00:06:49.792 real 0m2.636s 00:06:49.792 user 0m2.347s 00:06:49.792 sys 0m0.297s 00:06:49.792 21:53:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.792 21:53:00 -- common/autotest_common.sh@10 -- # set +x 00:06:49.792 ************************************ 00:06:49.792 END TEST accel_crc32c 00:06:49.792 ************************************ 00:06:50.051 21:53:01 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:50.051 21:53:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:50.051 21:53:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.051 21:53:01 -- common/autotest_common.sh@10 -- # set +x 00:06:50.051 ************************************ 00:06:50.051 START TEST accel_crc32c_C2 00:06:50.051 ************************************ 00:06:50.051 21:53:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:50.051 21:53:01 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.051 21:53:01 -- accel/accel.sh@17 -- # local accel_module 00:06:50.051 21:53:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.051 21:53:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.051 21:53:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.051 21:53:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.051 21:53:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.051 21:53:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.051 21:53:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.051 21:53:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.051 21:53:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.051 21:53:01 -- accel/accel.sh@42 -- # jq -r . 00:06:50.051 [2024-07-26 21:53:01.063620] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:50.051 [2024-07-26 21:53:01.063691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016519 ] 00:06:50.051 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.051 [2024-07-26 21:53:01.147690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.051 [2024-07-26 21:53:01.182681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.430 21:53:02 -- accel/accel.sh@18 -- # out=' 00:06:51.430 SPDK Configuration: 00:06:51.430 Core mask: 0x1 00:06:51.430 00:06:51.430 Accel Perf Configuration: 00:06:51.430 Workload Type: crc32c 00:06:51.430 CRC-32C seed: 0 00:06:51.430 Transfer size: 4096 bytes 00:06:51.430 Vector count 2 00:06:51.430 Module: software 00:06:51.430 Queue depth: 32 00:06:51.430 Allocate depth: 32 00:06:51.430 # threads/core: 1 00:06:51.430 Run time: 1 seconds 00:06:51.430 Verify: Yes 00:06:51.430 00:06:51.430 Running for 1 seconds... 00:06:51.430 00:06:51.430 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.430 ------------------------------------------------------------------------------------ 00:06:51.430 0,0 474528/s 3707 MiB/s 0 0 00:06:51.430 ==================================================================================== 00:06:51.430 Total 474528/s 1853 MiB/s 0 0' 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:51.430 21:53:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:51.430 21:53:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.430 21:53:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.430 21:53:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.430 21:53:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.430 21:53:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.430 21:53:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.430 21:53:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.430 21:53:02 -- accel/accel.sh@42 -- # jq -r . 00:06:51.430 [2024-07-26 21:53:02.375619] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:51.430 [2024-07-26 21:53:02.375692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016754 ] 00:06:51.430 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.430 [2024-07-26 21:53:02.459725] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.430 [2024-07-26 21:53:02.493894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val= 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val= 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val=0x1 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val= 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val= 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val=crc32c 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val=0 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val= 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val=software 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.430 21:53:02 -- accel/accel.sh@21 -- # val=32 00:06:51.430 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.430 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.431 21:53:02 -- accel/accel.sh@21 -- # val=32 00:06:51.431 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.431 21:53:02 -- 
accel/accel.sh@21 -- # val=1 00:06:51.431 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.431 21:53:02 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.431 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.431 21:53:02 -- accel/accel.sh@21 -- # val=Yes 00:06:51.431 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.431 21:53:02 -- accel/accel.sh@21 -- # val= 00:06:51.431 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:51.431 21:53:02 -- accel/accel.sh@21 -- # val= 00:06:51.431 21:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:51.431 21:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:52.808 21:53:03 -- accel/accel.sh@21 -- # val= 00:06:52.808 21:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.808 21:53:03 -- accel/accel.sh@21 -- # val= 00:06:52.808 21:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.808 21:53:03 -- accel/accel.sh@21 -- # val= 00:06:52.808 21:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.808 21:53:03 -- accel/accel.sh@21 -- # val= 00:06:52.808 21:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.808 21:53:03 -- accel/accel.sh@21 -- # val= 00:06:52.808 21:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.808 21:53:03 -- accel/accel.sh@21 -- # val= 00:06:52.808 21:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:52.808 21:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:52.808 21:53:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.808 21:53:03 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:52.808 21:53:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.808 00:06:52.808 real 0m2.626s 00:06:52.808 user 0m2.357s 00:06:52.808 sys 0m0.278s 00:06:52.808 21:53:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.808 21:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:52.808 ************************************ 00:06:52.808 END TEST accel_crc32c_C2 00:06:52.808 ************************************ 00:06:52.808 21:53:03 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:52.808 21:53:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:52.808 21:53:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.808 21:53:03 -- common/autotest_common.sh@10 -- # set +x 00:06:52.808 ************************************ 00:06:52.808 START TEST accel_copy 
00:06:52.808 ************************************ 00:06:52.808 21:53:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:52.808 21:53:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.808 21:53:03 -- accel/accel.sh@17 -- # local accel_module 00:06:52.808 21:53:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:52.808 21:53:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:52.808 21:53:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.808 21:53:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.808 21:53:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.808 21:53:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.808 21:53:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.808 21:53:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.808 21:53:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.808 21:53:03 -- accel/accel.sh@42 -- # jq -r . 00:06:52.808 [2024-07-26 21:53:03.741261] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:52.808 [2024-07-26 21:53:03.741329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2016951 ] 00:06:52.808 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.808 [2024-07-26 21:53:03.825369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.808 [2024-07-26 21:53:03.860656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.188 21:53:05 -- accel/accel.sh@18 -- # out=' 00:06:54.188 SPDK Configuration: 00:06:54.188 Core mask: 0x1 00:06:54.188 00:06:54.188 Accel Perf Configuration: 00:06:54.188 Workload Type: copy 00:06:54.188 Transfer size: 4096 bytes 00:06:54.188 Vector count 1 00:06:54.188 Module: software 00:06:54.188 Queue depth: 32 00:06:54.188 Allocate depth: 32 00:06:54.188 # threads/core: 1 00:06:54.188 Run time: 1 seconds 00:06:54.188 Verify: Yes 00:06:54.188 00:06:54.188 Running for 1 seconds... 00:06:54.188 00:06:54.188 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.188 ------------------------------------------------------------------------------------ 00:06:54.188 0,0 453760/s 1772 MiB/s 0 0 00:06:54.188 ==================================================================================== 00:06:54.188 Total 453760/s 1772 MiB/s 0 0' 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:54.188 21:53:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:54.188 21:53:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.188 21:53:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.188 21:53:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.188 21:53:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.188 21:53:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.188 21:53:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.188 21:53:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.188 21:53:05 -- accel/accel.sh@42 -- # jq -r . 00:06:54.188 [2024-07-26 21:53:05.054048] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:54.188 [2024-07-26 21:53:05.054114] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017106 ] 00:06:54.188 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.188 [2024-07-26 21:53:05.140882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.188 [2024-07-26 21:53:05.175313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val= 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val= 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val=0x1 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val= 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val= 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val=copy 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val= 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val=software 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val=32 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val=32 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val=1 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val=Yes 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val= 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:54.188 21:53:05 -- accel/accel.sh@21 -- # val= 00:06:54.188 21:53:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # IFS=: 00:06:54.188 21:53:05 -- accel/accel.sh@20 -- # read -r var val 00:06:55.125 21:53:06 -- accel/accel.sh@21 -- # val= 00:06:55.125 21:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.125 21:53:06 -- accel/accel.sh@21 -- # val= 00:06:55.125 21:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.125 21:53:06 -- accel/accel.sh@21 -- # val= 00:06:55.125 21:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.125 21:53:06 -- accel/accel.sh@21 -- # val= 00:06:55.125 21:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.125 21:53:06 -- accel/accel.sh@21 -- # val= 00:06:55.125 21:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.125 21:53:06 -- accel/accel.sh@21 -- # val= 00:06:55.125 21:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # IFS=: 00:06:55.125 21:53:06 -- accel/accel.sh@20 -- # read -r var val 00:06:55.125 21:53:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.125 21:53:06 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:55.125 21:53:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.125 00:06:55.125 real 0m2.632s 00:06:55.125 user 0m2.356s 00:06:55.125 sys 0m0.285s 00:06:55.125 21:53:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.125 21:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:55.125 ************************************ 00:06:55.125 END TEST accel_copy 00:06:55.125 ************************************ 00:06:55.383 21:53:06 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.383 21:53:06 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:55.383 21:53:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.383 21:53:06 -- common/autotest_common.sh@10 -- # set +x 00:06:55.383 ************************************ 00:06:55.383 START TEST accel_fill 00:06:55.383 ************************************ 00:06:55.383 21:53:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.383 21:53:06 -- accel/accel.sh@16 -- # local accel_opc 
00:06:55.383 21:53:06 -- accel/accel.sh@17 -- # local accel_module 00:06:55.383 21:53:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.383 21:53:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.383 21:53:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.383 21:53:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.383 21:53:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.383 21:53:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.383 21:53:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.383 21:53:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.383 21:53:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.383 21:53:06 -- accel/accel.sh@42 -- # jq -r . 00:06:55.383 [2024-07-26 21:53:06.420725] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:55.383 [2024-07-26 21:53:06.420793] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017383 ] 00:06:55.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.383 [2024-07-26 21:53:06.504551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.383 [2024-07-26 21:53:06.540042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.825 21:53:07 -- accel/accel.sh@18 -- # out=' 00:06:56.825 SPDK Configuration: 00:06:56.825 Core mask: 0x1 00:06:56.825 00:06:56.825 Accel Perf Configuration: 00:06:56.825 Workload Type: fill 00:06:56.825 Fill pattern: 0x80 00:06:56.825 Transfer size: 4096 bytes 00:06:56.825 Vector count 1 00:06:56.825 Module: software 00:06:56.825 Queue depth: 64 00:06:56.825 Allocate depth: 64 00:06:56.825 # threads/core: 1 00:06:56.825 Run time: 1 seconds 00:06:56.825 Verify: Yes 00:06:56.825 00:06:56.825 Running for 1 seconds... 00:06:56.825 00:06:56.825 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.825 ------------------------------------------------------------------------------------ 00:06:56.825 0,0 699712/s 2733 MiB/s 0 0 00:06:56.825 ==================================================================================== 00:06:56.825 Total 699712/s 2733 MiB/s 0 0' 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.825 21:53:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:56.825 21:53:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.825 21:53:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.825 21:53:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.825 21:53:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.825 21:53:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.825 21:53:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.825 21:53:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.825 21:53:07 -- accel/accel.sh@42 -- # jq -r . 00:06:56.825 [2024-07-26 21:53:07.733159] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:56.825 [2024-07-26 21:53:07.733229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017653 ] 00:06:56.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.825 [2024-07-26 21:53:07.817102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.825 [2024-07-26 21:53:07.851218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val= 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val= 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val=0x1 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val= 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val= 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val=fill 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val=0x80 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val= 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val=software 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@23 -- # accel_module=software 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val=64 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.825 21:53:07 -- accel/accel.sh@21 -- # val=64 00:06:56.825 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.825 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.826 21:53:07 -- 
accel/accel.sh@21 -- # val=1 00:06:56.826 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.826 21:53:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:56.826 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.826 21:53:07 -- accel/accel.sh@21 -- # val=Yes 00:06:56.826 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.826 21:53:07 -- accel/accel.sh@21 -- # val= 00:06:56.826 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:56.826 21:53:07 -- accel/accel.sh@21 -- # val= 00:06:56.826 21:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # IFS=: 00:06:56.826 21:53:07 -- accel/accel.sh@20 -- # read -r var val 00:06:58.205 21:53:09 -- accel/accel.sh@21 -- # val= 00:06:58.205 21:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.205 21:53:09 -- accel/accel.sh@21 -- # val= 00:06:58.205 21:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.205 21:53:09 -- accel/accel.sh@21 -- # val= 00:06:58.205 21:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.205 21:53:09 -- accel/accel.sh@21 -- # val= 00:06:58.205 21:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.205 21:53:09 -- accel/accel.sh@21 -- # val= 00:06:58.205 21:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.205 21:53:09 -- accel/accel.sh@21 -- # val= 00:06:58.205 21:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # IFS=: 00:06:58.205 21:53:09 -- accel/accel.sh@20 -- # read -r var val 00:06:58.205 21:53:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.205 21:53:09 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:58.205 21:53:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.205 00:06:58.205 real 0m2.629s 00:06:58.205 user 0m2.366s 00:06:58.205 sys 0m0.270s 00:06:58.205 21:53:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.205 21:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:58.205 ************************************ 00:06:58.205 END TEST accel_fill 00:06:58.205 ************************************ 00:06:58.205 21:53:09 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:58.205 21:53:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.205 21:53:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.205 21:53:09 -- common/autotest_common.sh@10 -- # set +x 00:06:58.205 ************************************ 00:06:58.205 START TEST 
accel_copy_crc32c 00:06:58.205 ************************************ 00:06:58.205 21:53:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:58.205 21:53:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.205 21:53:09 -- accel/accel.sh@17 -- # local accel_module 00:06:58.205 21:53:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.205 21:53:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.205 21:53:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.205 21:53:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.205 21:53:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.205 21:53:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.205 21:53:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.205 21:53:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.205 21:53:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.205 21:53:09 -- accel/accel.sh@42 -- # jq -r . 00:06:58.205 [2024-07-26 21:53:09.098452] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:58.205 [2024-07-26 21:53:09.098519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017939 ] 00:06:58.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.205 [2024-07-26 21:53:09.182388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.205 [2024-07-26 21:53:09.217347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.585 21:53:10 -- accel/accel.sh@18 -- # out=' 00:06:59.585 SPDK Configuration: 00:06:59.585 Core mask: 0x1 00:06:59.585 00:06:59.585 Accel Perf Configuration: 00:06:59.585 Workload Type: copy_crc32c 00:06:59.585 CRC-32C seed: 0 00:06:59.585 Vector size: 4096 bytes 00:06:59.585 Transfer size: 4096 bytes 00:06:59.585 Vector count 1 00:06:59.585 Module: software 00:06:59.585 Queue depth: 32 00:06:59.585 Allocate depth: 32 00:06:59.585 # threads/core: 1 00:06:59.585 Run time: 1 seconds 00:06:59.585 Verify: Yes 00:06:59.585 00:06:59.585 Running for 1 seconds... 00:06:59.585 00:06:59.585 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.585 ------------------------------------------------------------------------------------ 00:06:59.585 0,0 337568/s 1318 MiB/s 0 0 00:06:59.585 ==================================================================================== 00:06:59.585 Total 337568/s 1318 MiB/s 0 0' 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:59.585 21:53:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:59.585 21:53:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.585 21:53:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.585 21:53:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.585 21:53:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.585 21:53:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.585 21:53:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.585 21:53:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.585 21:53:10 -- accel/accel.sh@42 -- # jq -r . 
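A quick sanity check on the copy_crc32c table just printed: the bandwidth column is simply the transfer rate multiplied by the 4096-byte transfer size. The shell arithmetic below is illustrative only, using the figures exactly as reported:

  echo $(( 337568 * 4096 / 1024 / 1024 ))    # prints 1318, matching the 1318 MiB/s in the table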
00:06:59.585 [2024-07-26 21:53:10.412340] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:59.585 [2024-07-26 21:53:10.412408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018211 ] 00:06:59.585 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.585 [2024-07-26 21:53:10.498141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.585 [2024-07-26 21:53:10.532852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val= 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val= 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val=0x1 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val= 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val= 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val=0 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val= 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val=software 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val=32 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 
00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val=32 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val=1 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val=Yes 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val= 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:06:59.585 21:53:10 -- accel/accel.sh@21 -- # val= 00:06:59.585 21:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # IFS=: 00:06:59.585 21:53:10 -- accel/accel.sh@20 -- # read -r var val 00:07:00.524 21:53:11 -- accel/accel.sh@21 -- # val= 00:07:00.524 21:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.524 21:53:11 -- accel/accel.sh@21 -- # val= 00:07:00.524 21:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.524 21:53:11 -- accel/accel.sh@21 -- # val= 00:07:00.524 21:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.524 21:53:11 -- accel/accel.sh@21 -- # val= 00:07:00.524 21:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.524 21:53:11 -- accel/accel.sh@21 -- # val= 00:07:00.524 21:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.524 21:53:11 -- accel/accel.sh@21 -- # val= 00:07:00.524 21:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:00.524 21:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:00.524 21:53:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:00.524 21:53:11 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:00.524 21:53:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.524 00:07:00.524 real 0m2.634s 00:07:00.524 user 0m2.365s 00:07:00.524 sys 0m0.279s 00:07:00.524 21:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.524 21:53:11 -- common/autotest_common.sh@10 -- # set +x 00:07:00.524 ************************************ 00:07:00.524 END TEST accel_copy_crc32c 00:07:00.524 ************************************ 00:07:00.524 
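The accel_copy_crc32c test that just passed drives a single accel_perf invocation on the software path. Outside the harness the same run can be reproduced with the command recorded above; this is a sketch only: the path is this job's workspace, and -c /dev/fd/62 is dropped because build_accel_config produced an empty module configuration here:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y    # queue depth 32 / allocate depth 32 are the defaults seen in the output above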
21:53:11 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.524 21:53:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:00.524 21:53:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.524 21:53:11 -- common/autotest_common.sh@10 -- # set +x 00:07:00.783 ************************************ 00:07:00.783 START TEST accel_copy_crc32c_C2 00:07:00.783 ************************************ 00:07:00.783 21:53:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:00.783 21:53:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.783 21:53:11 -- accel/accel.sh@17 -- # local accel_module 00:07:00.783 21:53:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:00.783 21:53:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:00.783 21:53:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.783 21:53:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.783 21:53:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.783 21:53:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.783 21:53:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.783 21:53:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.783 21:53:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.783 21:53:11 -- accel/accel.sh@42 -- # jq -r . 00:07:00.783 [2024-07-26 21:53:11.779676] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:00.784 [2024-07-26 21:53:11.779743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018492 ] 00:07:00.784 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.784 [2024-07-26 21:53:11.862772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.784 [2024-07-26 21:53:11.897971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.163 21:53:13 -- accel/accel.sh@18 -- # out=' 00:07:02.163 SPDK Configuration: 00:07:02.163 Core mask: 0x1 00:07:02.163 00:07:02.163 Accel Perf Configuration: 00:07:02.163 Workload Type: copy_crc32c 00:07:02.163 CRC-32C seed: 0 00:07:02.163 Vector size: 4096 bytes 00:07:02.163 Transfer size: 8192 bytes 00:07:02.163 Vector count 2 00:07:02.163 Module: software 00:07:02.163 Queue depth: 32 00:07:02.163 Allocate depth: 32 00:07:02.163 # threads/core: 1 00:07:02.163 Run time: 1 seconds 00:07:02.163 Verify: Yes 00:07:02.163 00:07:02.163 Running for 1 seconds... 
00:07:02.163 00:07:02.163 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.163 ------------------------------------------------------------------------------------ 00:07:02.163 0,0 252032/s 1969 MiB/s 0 0 00:07:02.163 ==================================================================================== 00:07:02.163 Total 252032/s 1969 MiB/s 0 0' 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:02.163 21:53:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:02.163 21:53:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.163 21:53:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.163 21:53:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.163 21:53:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.163 21:53:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.163 21:53:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.163 21:53:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.163 21:53:13 -- accel/accel.sh@42 -- # jq -r . 00:07:02.163 [2024-07-26 21:53:13.092146] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:02.163 [2024-07-26 21:53:13.092214] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018717 ] 00:07:02.163 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.163 [2024-07-26 21:53:13.175241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.163 [2024-07-26 21:53:13.209772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val= 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val= 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val=0x1 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val= 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val= 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val=0 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 
00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.163 21:53:13 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:02.163 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.163 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val= 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val=software 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val=32 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val=32 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val=1 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val=Yes 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val= 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:02.164 21:53:13 -- accel/accel.sh@21 -- # val= 00:07:02.164 21:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:02.164 21:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:03.541 21:53:14 -- accel/accel.sh@21 -- # val= 00:07:03.541 21:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.541 21:53:14 -- accel/accel.sh@21 -- # val= 00:07:03.541 21:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.541 21:53:14 -- accel/accel.sh@21 -- # val= 00:07:03.541 21:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.541 21:53:14 -- accel/accel.sh@21 -- # val= 00:07:03.541 21:53:14 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.541 21:53:14 -- accel/accel.sh@21 -- # val= 00:07:03.541 21:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.541 21:53:14 -- accel/accel.sh@21 -- # val= 00:07:03.541 21:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:03.541 21:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:03.541 21:53:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.541 21:53:14 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:03.541 21:53:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.541 00:07:03.541 real 0m2.629s 00:07:03.541 user 0m2.358s 00:07:03.541 sys 0m0.279s 00:07:03.541 21:53:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.541 21:53:14 -- common/autotest_common.sh@10 -- # set +x 00:07:03.541 ************************************ 00:07:03.541 END TEST accel_copy_crc32c_C2 00:07:03.541 ************************************ 00:07:03.541 21:53:14 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:03.541 21:53:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:03.541 21:53:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.541 21:53:14 -- common/autotest_common.sh@10 -- # set +x 00:07:03.541 ************************************ 00:07:03.541 START TEST accel_dualcast 00:07:03.541 ************************************ 00:07:03.541 21:53:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:03.541 21:53:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.541 21:53:14 -- accel/accel.sh@17 -- # local accel_module 00:07:03.541 21:53:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:03.541 21:53:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:03.541 21:53:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.541 21:53:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.541 21:53:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.541 21:53:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.541 21:53:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.541 21:53:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.541 21:53:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.541 21:53:14 -- accel/accel.sh@42 -- # jq -r . 00:07:03.541 [2024-07-26 21:53:14.458130] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:03.541 [2024-07-26 21:53:14.458198] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2018915 ] 00:07:03.541 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.541 [2024-07-26 21:53:14.541816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.541 [2024-07-26 21:53:14.576881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.920 21:53:15 -- accel/accel.sh@18 -- # out=' 00:07:04.920 SPDK Configuration: 00:07:04.920 Core mask: 0x1 00:07:04.920 00:07:04.920 Accel Perf Configuration: 00:07:04.920 Workload Type: dualcast 00:07:04.920 Transfer size: 4096 bytes 00:07:04.920 Vector count 1 00:07:04.920 Module: software 00:07:04.920 Queue depth: 32 00:07:04.920 Allocate depth: 32 00:07:04.920 # threads/core: 1 00:07:04.920 Run time: 1 seconds 00:07:04.920 Verify: Yes 00:07:04.920 00:07:04.920 Running for 1 seconds... 00:07:04.920 00:07:04.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.920 ------------------------------------------------------------------------------------ 00:07:04.920 0,0 539072/s 2105 MiB/s 0 0 00:07:04.920 ==================================================================================== 00:07:04.920 Total 539072/s 2105 MiB/s 0 0' 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:04.920 21:53:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:04.920 21:53:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.920 21:53:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.920 21:53:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.920 21:53:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.920 21:53:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.920 21:53:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.920 21:53:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.920 21:53:15 -- accel/accel.sh@42 -- # jq -r . 00:07:04.920 [2024-07-26 21:53:15.769910] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
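Worth noting on the dualcast table above: a dualcast operation copies one 4096-byte source buffer into two destination buffers, yet the bandwidth column appears to count only the source side, since 539072 transfers/s multiplied by 4096 bytes works out to roughly 2105 MiB/s, exactly the figure reported.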
00:07:04.920 [2024-07-26 21:53:15.770003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019076 ] 00:07:04.920 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.920 [2024-07-26 21:53:15.854774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.920 [2024-07-26 21:53:15.888973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val= 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val= 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val=0x1 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val= 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val= 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val=dualcast 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val= 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val=software 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val=32 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val=32 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val=1 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val=Yes 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val= 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:04.920 21:53:15 -- accel/accel.sh@21 -- # val= 00:07:04.920 21:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:04.920 21:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:05.858 21:53:17 -- accel/accel.sh@21 -- # val= 00:07:05.858 21:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.858 21:53:17 -- accel/accel.sh@21 -- # val= 00:07:05.858 21:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.858 21:53:17 -- accel/accel.sh@21 -- # val= 00:07:05.858 21:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.858 21:53:17 -- accel/accel.sh@21 -- # val= 00:07:05.858 21:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.858 21:53:17 -- accel/accel.sh@21 -- # val= 00:07:05.858 21:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.858 21:53:17 -- accel/accel.sh@21 -- # val= 00:07:05.858 21:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:05.858 21:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:05.858 21:53:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.858 21:53:17 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:05.858 21:53:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.858 00:07:05.858 real 0m2.629s 00:07:05.858 user 0m2.361s 00:07:05.858 sys 0m0.277s 00:07:05.858 21:53:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.858 21:53:17 -- common/autotest_common.sh@10 -- # set +x 00:07:05.858 ************************************ 00:07:05.858 END TEST accel_dualcast 00:07:05.858 ************************************ 00:07:06.118 21:53:17 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:06.118 21:53:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.118 21:53:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.118 21:53:17 -- common/autotest_common.sh@10 -- # set +x 00:07:06.118 ************************************ 00:07:06.118 START TEST accel_compare 00:07:06.118 ************************************ 00:07:06.118 21:53:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:06.118 21:53:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.118 21:53:17 
-- accel/accel.sh@17 -- # local accel_module 00:07:06.118 21:53:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:06.118 21:53:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:06.118 21:53:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.118 21:53:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.118 21:53:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.118 21:53:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.118 21:53:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.118 21:53:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.118 21:53:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.118 21:53:17 -- accel/accel.sh@42 -- # jq -r . 00:07:06.118 [2024-07-26 21:53:17.136407] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:06.118 [2024-07-26 21:53:17.136474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019354 ] 00:07:06.118 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.118 [2024-07-26 21:53:17.220089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.118 [2024-07-26 21:53:17.255129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.496 21:53:18 -- accel/accel.sh@18 -- # out=' 00:07:07.496 SPDK Configuration: 00:07:07.496 Core mask: 0x1 00:07:07.496 00:07:07.496 Accel Perf Configuration: 00:07:07.496 Workload Type: compare 00:07:07.496 Transfer size: 4096 bytes 00:07:07.496 Vector count 1 00:07:07.496 Module: software 00:07:07.496 Queue depth: 32 00:07:07.496 Allocate depth: 32 00:07:07.496 # threads/core: 1 00:07:07.496 Run time: 1 seconds 00:07:07.496 Verify: Yes 00:07:07.496 00:07:07.496 Running for 1 seconds... 00:07:07.496 00:07:07.496 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.496 ------------------------------------------------------------------------------------ 00:07:07.496 0,0 653632/s 2553 MiB/s 0 0 00:07:07.496 ==================================================================================== 00:07:07.496 Total 653632/s 2553 MiB/s 0 0' 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:07.496 21:53:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:07.496 21:53:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.496 21:53:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.496 21:53:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.496 21:53:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.496 21:53:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.496 21:53:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.496 21:53:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.496 21:53:18 -- accel/accel.sh@42 -- # jq -r . 00:07:07.496 [2024-07-26 21:53:18.448750] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:07.496 [2024-07-26 21:53:18.448817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019622 ] 00:07:07.496 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.496 [2024-07-26 21:53:18.532397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.496 [2024-07-26 21:53:18.566490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val= 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val= 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val=0x1 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val= 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val= 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val=compare 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val= 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val=software 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val=32 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val=32 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val=1 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val=Yes 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val= 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.496 21:53:18 -- accel/accel.sh@21 -- # val= 00:07:07.496 21:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:07.496 21:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:08.873 21:53:19 -- accel/accel.sh@21 -- # val= 00:07:08.873 21:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.873 21:53:19 -- accel/accel.sh@21 -- # val= 00:07:08.873 21:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.873 21:53:19 -- accel/accel.sh@21 -- # val= 00:07:08.873 21:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.873 21:53:19 -- accel/accel.sh@21 -- # val= 00:07:08.873 21:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.873 21:53:19 -- accel/accel.sh@21 -- # val= 00:07:08.873 21:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.873 21:53:19 -- accel/accel.sh@21 -- # val= 00:07:08.873 21:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:08.873 21:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:08.873 21:53:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.873 21:53:19 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:08.873 21:53:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.873 00:07:08.873 real 0m2.629s 00:07:08.873 user 0m2.354s 00:07:08.873 sys 0m0.284s 00:07:08.873 21:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.873 21:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:08.873 ************************************ 00:07:08.873 END TEST accel_compare 00:07:08.873 ************************************ 00:07:08.873 21:53:19 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:08.873 21:53:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:08.873 21:53:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.873 21:53:19 -- common/autotest_common.sh@10 -- # set +x 00:07:08.873 ************************************ 00:07:08.873 START TEST accel_xor 00:07:08.873 ************************************ 00:07:08.873 21:53:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:08.873 21:53:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.873 21:53:19 -- accel/accel.sh@17 
-- # local accel_module 00:07:08.873 21:53:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:08.873 21:53:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:08.873 21:53:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.873 21:53:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.873 21:53:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.873 21:53:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.873 21:53:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.873 21:53:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.873 21:53:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.873 21:53:19 -- accel/accel.sh@42 -- # jq -r . 00:07:08.873 [2024-07-26 21:53:19.814584] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:08.873 [2024-07-26 21:53:19.814656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2019909 ] 00:07:08.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.873 [2024-07-26 21:53:19.898354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.873 [2024-07-26 21:53:19.933264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.250 21:53:21 -- accel/accel.sh@18 -- # out=' 00:07:10.250 SPDK Configuration: 00:07:10.250 Core mask: 0x1 00:07:10.250 00:07:10.250 Accel Perf Configuration: 00:07:10.250 Workload Type: xor 00:07:10.250 Source buffers: 2 00:07:10.250 Transfer size: 4096 bytes 00:07:10.250 Vector count 1 00:07:10.250 Module: software 00:07:10.250 Queue depth: 32 00:07:10.250 Allocate depth: 32 00:07:10.250 # threads/core: 1 00:07:10.250 Run time: 1 seconds 00:07:10.250 Verify: Yes 00:07:10.250 00:07:10.250 Running for 1 seconds... 00:07:10.250 00:07:10.250 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.250 ------------------------------------------------------------------------------------ 00:07:10.251 0,0 484896/s 1894 MiB/s 0 0 00:07:10.251 ==================================================================================== 00:07:10.251 Total 484896/s 1894 MiB/s 0 0' 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:10.251 21:53:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:10.251 21:53:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.251 21:53:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.251 21:53:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.251 21:53:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.251 21:53:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.251 21:53:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.251 21:53:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.251 21:53:21 -- accel/accel.sh@42 -- # jq -r . 00:07:10.251 [2024-07-26 21:53:21.128790] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
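If the xor table above ever needs post-processing (for trend tracking across builds, say), accel.sh already captures it in the out variable; a small awk filter over a saved copy is enough to pull out the total rate. The snippet is hypothetical, with results.txt standing in for wherever that capture gets written:

  awk '$1 == "Total" {print $2, $3, $4}' results.txt    # -> 484896/s 1894 MiB/s for this run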
00:07:10.251 [2024-07-26 21:53:21.128881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020177 ] 00:07:10.251 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.251 [2024-07-26 21:53:21.213084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.251 [2024-07-26 21:53:21.247453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val= 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val= 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val=0x1 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val= 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val= 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val=xor 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val=2 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val= 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val=software 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val=32 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val=32 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- 
accel/accel.sh@21 -- # val=1 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val=Yes 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val= 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.251 21:53:21 -- accel/accel.sh@21 -- # val= 00:07:10.251 21:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:10.251 21:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:11.188 21:53:22 -- accel/accel.sh@21 -- # val= 00:07:11.188 21:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.188 21:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.188 21:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.188 21:53:22 -- accel/accel.sh@21 -- # val= 00:07:11.188 21:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.188 21:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.447 21:53:22 -- accel/accel.sh@21 -- # val= 00:07:11.447 21:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.447 21:53:22 -- accel/accel.sh@21 -- # val= 00:07:11.447 21:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.447 21:53:22 -- accel/accel.sh@21 -- # val= 00:07:11.447 21:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.447 21:53:22 -- accel/accel.sh@21 -- # val= 00:07:11.447 21:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:11.447 21:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:11.447 21:53:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.447 21:53:22 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:11.447 21:53:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.447 00:07:11.447 real 0m2.635s 00:07:11.447 user 0m2.358s 00:07:11.447 sys 0m0.284s 00:07:11.447 21:53:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.447 21:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 ************************************ 00:07:11.447 END TEST accel_xor 00:07:11.447 ************************************ 00:07:11.447 21:53:22 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:11.447 21:53:22 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:11.447 21:53:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.447 21:53:22 -- common/autotest_common.sh@10 -- # set +x 00:07:11.447 ************************************ 00:07:11.447 START TEST accel_xor 
00:07:11.447 ************************************ 00:07:11.447 21:53:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:11.447 21:53:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.447 21:53:22 -- accel/accel.sh@17 -- # local accel_module 00:07:11.447 21:53:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:11.447 21:53:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:11.447 21:53:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.447 21:53:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.447 21:53:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.447 21:53:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.447 21:53:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.447 21:53:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.447 21:53:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.447 21:53:22 -- accel/accel.sh@42 -- # jq -r . 00:07:11.447 [2024-07-26 21:53:22.495946] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:11.447 [2024-07-26 21:53:22.496011] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020458 ] 00:07:11.447 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.447 [2024-07-26 21:53:22.580213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.447 [2024-07-26 21:53:22.615559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.826 21:53:23 -- accel/accel.sh@18 -- # out=' 00:07:12.826 SPDK Configuration: 00:07:12.826 Core mask: 0x1 00:07:12.826 00:07:12.826 Accel Perf Configuration: 00:07:12.826 Workload Type: xor 00:07:12.826 Source buffers: 3 00:07:12.826 Transfer size: 4096 bytes 00:07:12.826 Vector count 1 00:07:12.826 Module: software 00:07:12.826 Queue depth: 32 00:07:12.826 Allocate depth: 32 00:07:12.826 # threads/core: 1 00:07:12.826 Run time: 1 seconds 00:07:12.826 Verify: Yes 00:07:12.826 00:07:12.826 Running for 1 seconds... 00:07:12.826 00:07:12.826 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.826 ------------------------------------------------------------------------------------ 00:07:12.826 0,0 472224/s 1844 MiB/s 0 0 00:07:12.826 ==================================================================================== 00:07:12.826 Total 472224/s 1844 MiB/s 0 0' 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.826 21:53:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:12.826 21:53:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:12.826 21:53:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.826 21:53:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.826 21:53:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.826 21:53:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.826 21:53:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.826 21:53:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.826 21:53:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.826 21:53:23 -- accel/accel.sh@42 -- # jq -r . 00:07:12.826 [2024-07-26 21:53:23.809940] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
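Comparing the two xor runs, adding a third source buffer costs only a little throughput: 484896 transfers/s with two buffers versus 472224 transfers/s with three. Both bandwidth figures again follow from the 4096-byte transfer size, as an illustrative loop over the reported rates shows:

  for rate in 484896 472224; do echo "$rate -> $(( rate * 4096 / 1024 / 1024 )) MiB/s"; done    # 1894 and 1844 MiB/s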
00:07:12.826 [2024-07-26 21:53:23.810008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020642 ] 00:07:12.826 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.826 [2024-07-26 21:53:23.895393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.826 [2024-07-26 21:53:23.930390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.826 21:53:23 -- accel/accel.sh@21 -- # val= 00:07:12.826 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.826 21:53:23 -- accel/accel.sh@21 -- # val= 00:07:12.826 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.826 21:53:23 -- accel/accel.sh@21 -- # val=0x1 00:07:12.826 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.826 21:53:23 -- accel/accel.sh@21 -- # val= 00:07:12.826 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.826 21:53:23 -- accel/accel.sh@21 -- # val= 00:07:12.826 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.826 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val=xor 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val=3 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val= 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val=software 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val=32 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val=32 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- 
accel/accel.sh@21 -- # val=1 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val=Yes 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val= 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:12.827 21:53:23 -- accel/accel.sh@21 -- # val= 00:07:12.827 21:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:12.827 21:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:14.267 21:53:25 -- accel/accel.sh@21 -- # val= 00:07:14.267 21:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.267 21:53:25 -- accel/accel.sh@21 -- # val= 00:07:14.267 21:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.267 21:53:25 -- accel/accel.sh@21 -- # val= 00:07:14.267 21:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.267 21:53:25 -- accel/accel.sh@21 -- # val= 00:07:14.267 21:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.267 21:53:25 -- accel/accel.sh@21 -- # val= 00:07:14.267 21:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.267 21:53:25 -- accel/accel.sh@21 -- # val= 00:07:14.267 21:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:14.267 21:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:14.267 21:53:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.267 21:53:25 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:14.267 21:53:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.267 00:07:14.267 real 0m2.635s 00:07:14.267 user 0m2.367s 00:07:14.267 sys 0m0.276s 00:07:14.267 21:53:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.268 21:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:14.268 ************************************ 00:07:14.268 END TEST accel_xor 00:07:14.268 ************************************ 00:07:14.268 21:53:25 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:14.268 21:53:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:14.268 21:53:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.268 21:53:25 -- common/autotest_common.sh@10 -- # set +x 00:07:14.268 ************************************ 00:07:14.268 START TEST 
accel_dif_verify 00:07:14.268 ************************************ 00:07:14.268 21:53:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:14.268 21:53:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.268 21:53:25 -- accel/accel.sh@17 -- # local accel_module 00:07:14.268 21:53:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:14.268 21:53:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:14.268 21:53:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.268 21:53:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.268 21:53:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.268 21:53:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.268 21:53:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.268 21:53:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.268 21:53:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.268 21:53:25 -- accel/accel.sh@42 -- # jq -r . 00:07:14.268 [2024-07-26 21:53:25.178859] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:14.268 [2024-07-26 21:53:25.178934] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2020852 ] 00:07:14.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.268 [2024-07-26 21:53:25.265322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.268 [2024-07-26 21:53:25.300884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.648 21:53:26 -- accel/accel.sh@18 -- # out=' 00:07:15.648 SPDK Configuration: 00:07:15.648 Core mask: 0x1 00:07:15.648 00:07:15.648 Accel Perf Configuration: 00:07:15.648 Workload Type: dif_verify 00:07:15.648 Vector size: 4096 bytes 00:07:15.648 Transfer size: 4096 bytes 00:07:15.648 Block size: 512 bytes 00:07:15.648 Metadata size: 8 bytes 00:07:15.648 Vector count 1 00:07:15.648 Module: software 00:07:15.648 Queue depth: 32 00:07:15.648 Allocate depth: 32 00:07:15.648 # threads/core: 1 00:07:15.648 Run time: 1 seconds 00:07:15.648 Verify: No 00:07:15.648 00:07:15.648 Running for 1 seconds... 00:07:15.648 00:07:15.648 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.648 ------------------------------------------------------------------------------------ 00:07:15.648 0,0 137760/s 546 MiB/s 0 0 00:07:15.648 ==================================================================================== 00:07:15.648 Total 137760/s 538 MiB/s 0 0' 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:15.648 21:53:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:15.648 21:53:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.648 21:53:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.648 21:53:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.648 21:53:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.648 21:53:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.648 21:53:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.648 21:53:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.648 21:53:26 -- accel/accel.sh@42 -- # jq -r . 
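For orientation, the call chain behind the trace above can be read directly from it: run_test labels the test, accel_test forwards its arguments, and the accel_perf example binary does the actual work against the config that build_accel_config feeds it over a file descriptor. A schematic of that chain, with the command line copied verbatim from the @12 trace line (a readability sketch, not a quote of the accel.sh source):

  # run_test accel_dif_verify accel_test -t 1 -w dif_verify   (the @103 line above)
  # ...which ultimately executes:
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w dif_verify
  # -c: generated accel config passed via an fd, -t: run time in seconds, -w: workload type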
00:07:15.648 [2024-07-26 21:53:26.495333] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:15.648 [2024-07-26 21:53:26.495421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021039 ] 00:07:15.648 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.648 [2024-07-26 21:53:26.581949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.648 [2024-07-26 21:53:26.616719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val= 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val= 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val=0x1 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val= 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val= 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val=dif_verify 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val= 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val=software 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val=32 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val=32 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val=1 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val=No 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val= 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:15.648 21:53:26 -- accel/accel.sh@21 -- # val= 00:07:15.648 21:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:15.648 21:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:16.587 21:53:27 -- accel/accel.sh@21 -- # val= 00:07:16.587 21:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.587 21:53:27 -- accel/accel.sh@21 -- # val= 00:07:16.587 21:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.587 21:53:27 -- accel/accel.sh@21 -- # val= 00:07:16.587 21:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.587 21:53:27 -- accel/accel.sh@21 -- # val= 00:07:16.587 21:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.587 21:53:27 -- accel/accel.sh@21 -- # val= 00:07:16.587 21:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.587 21:53:27 -- accel/accel.sh@21 -- # val= 00:07:16.587 21:53:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:16.587 21:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:16.587 21:53:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.587 21:53:27 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:16.587 21:53:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.587 00:07:16.587 real 0m2.637s 00:07:16.587 user 0m2.364s 00:07:16.587 sys 0m0.283s 00:07:16.587 21:53:27 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.587 21:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:16.587 ************************************ 00:07:16.587 END TEST accel_dif_verify 00:07:16.587 ************************************ 00:07:16.847 21:53:27 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:16.847 21:53:27 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:16.847 21:53:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.847 21:53:27 -- common/autotest_common.sh@10 -- # set +x 00:07:16.847 ************************************ 00:07:16.847 START TEST accel_dif_generate 00:07:16.847 ************************************ 00:07:16.847 21:53:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:16.847 21:53:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.847 21:53:27 -- accel/accel.sh@17 -- # local accel_module 00:07:16.847 21:53:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:16.847 21:53:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:16.847 21:53:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.847 21:53:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.847 21:53:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.847 21:53:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.847 21:53:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.847 21:53:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.847 21:53:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.847 21:53:27 -- accel/accel.sh@42 -- # jq -r . 00:07:16.847 [2024-07-26 21:53:27.863025] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:16.847 [2024-07-26 21:53:27.863098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021328 ] 00:07:16.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.847 [2024-07-26 21:53:27.946134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.847 [2024-07-26 21:53:27.981131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.226 21:53:29 -- accel/accel.sh@18 -- # out=' 00:07:18.226 SPDK Configuration: 00:07:18.226 Core mask: 0x1 00:07:18.226 00:07:18.226 Accel Perf Configuration: 00:07:18.226 Workload Type: dif_generate 00:07:18.226 Vector size: 4096 bytes 00:07:18.226 Transfer size: 4096 bytes 00:07:18.226 Block size: 512 bytes 00:07:18.226 Metadata size: 8 bytes 00:07:18.226 Vector count 1 00:07:18.226 Module: software 00:07:18.226 Queue depth: 32 00:07:18.226 Allocate depth: 32 00:07:18.226 # threads/core: 1 00:07:18.226 Run time: 1 seconds 00:07:18.226 Verify: No 00:07:18.226 00:07:18.226 Running for 1 seconds... 
00:07:18.226 00:07:18.226 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.226 ------------------------------------------------------------------------------------ 00:07:18.226 0,0 163488/s 648 MiB/s 0 0 00:07:18.226 ==================================================================================== 00:07:18.226 Total 163488/s 638 MiB/s 0 0' 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:18.226 21:53:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:18.226 21:53:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.226 21:53:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.226 21:53:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.226 21:53:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.226 21:53:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.226 21:53:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.226 21:53:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.226 21:53:29 -- accel/accel.sh@42 -- # jq -r . 00:07:18.226 [2024-07-26 21:53:29.174986] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:18.226 [2024-07-26 21:53:29.175054] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021594 ] 00:07:18.226 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.226 [2024-07-26 21:53:29.257561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.226 [2024-07-26 21:53:29.292095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val= 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val= 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val=0x1 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val= 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val= 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val=dif_generate 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 
00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val= 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val=software 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val=32 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val=32 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val=1 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val=No 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val= 00:07:18.226 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.226 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:18.226 21:53:29 -- accel/accel.sh@21 -- # val= 00:07:18.227 21:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.227 21:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:18.227 21:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 21:53:30 -- accel/accel.sh@21 -- # val= 00:07:19.606 21:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.606 21:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.606 21:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 21:53:30 -- accel/accel.sh@21 -- # val= 00:07:19.606 21:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.606 21:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.606 21:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.606 21:53:30 -- accel/accel.sh@21 -- # val= 00:07:19.606 21:53:30 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:19.607 21:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.607 21:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.607 21:53:30 -- accel/accel.sh@21 -- # val= 00:07:19.607 21:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.607 21:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.607 21:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.607 21:53:30 -- accel/accel.sh@21 -- # val= 00:07:19.607 21:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.607 21:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.607 21:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.607 21:53:30 -- accel/accel.sh@21 -- # val= 00:07:19.607 21:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.607 21:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:19.607 21:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:19.607 21:53:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.607 21:53:30 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:19.607 21:53:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.607 00:07:19.607 real 0m2.627s 00:07:19.607 user 0m2.351s 00:07:19.607 sys 0m0.286s 00:07:19.607 21:53:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.607 21:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:19.607 ************************************ 00:07:19.607 END TEST accel_dif_generate 00:07:19.607 ************************************ 00:07:19.607 21:53:30 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:19.607 21:53:30 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:19.607 21:53:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.607 21:53:30 -- common/autotest_common.sh@10 -- # set +x 00:07:19.607 ************************************ 00:07:19.607 START TEST accel_dif_generate_copy 00:07:19.607 ************************************ 00:07:19.607 21:53:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:19.607 21:53:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.607 21:53:30 -- accel/accel.sh@17 -- # local accel_module 00:07:19.607 21:53:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:19.607 21:53:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:19.607 21:53:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.607 21:53:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.607 21:53:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.607 21:53:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.607 21:53:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.607 21:53:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.607 21:53:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.607 21:53:30 -- accel/accel.sh@42 -- # jq -r . 00:07:19.607 [2024-07-26 21:53:30.539250] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:19.607 [2024-07-26 21:53:30.539315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021877 ] 00:07:19.607 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.607 [2024-07-26 21:53:30.624964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.607 [2024-07-26 21:53:30.660088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.987 21:53:31 -- accel/accel.sh@18 -- # out=' 00:07:20.987 SPDK Configuration: 00:07:20.987 Core mask: 0x1 00:07:20.987 00:07:20.987 Accel Perf Configuration: 00:07:20.987 Workload Type: dif_generate_copy 00:07:20.987 Vector size: 4096 bytes 00:07:20.987 Transfer size: 4096 bytes 00:07:20.987 Vector count 1 00:07:20.987 Module: software 00:07:20.987 Queue depth: 32 00:07:20.987 Allocate depth: 32 00:07:20.987 # threads/core: 1 00:07:20.987 Run time: 1 seconds 00:07:20.987 Verify: No 00:07:20.987 00:07:20.987 Running for 1 seconds... 00:07:20.987 00:07:20.987 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.987 ------------------------------------------------------------------------------------ 00:07:20.987 0,0 125504/s 497 MiB/s 0 0 00:07:20.987 ==================================================================================== 00:07:20.987 Total 125504/s 490 MiB/s 0 0' 00:07:20.987 21:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:20.987 21:53:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:20.987 21:53:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.987 21:53:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.987 21:53:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.987 21:53:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.987 21:53:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.987 21:53:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.987 21:53:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.987 21:53:31 -- accel/accel.sh@42 -- # jq -r . 00:07:20.987 [2024-07-26 21:53:31.855101] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
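The long runs of '# val=...', '# case "$var" in', '# IFS=:' and '# read -r var val' entries are the xtrace of the harness walking the accel_perf settings one 'name: value' pair at a time. A minimal sketch of a loop with that shape (illustrative only; the variable names, checks and input file here are hypothetical, and the real logic lives in test/accel/accel.sh):

  while IFS=: read -r var val; do        # emits the IFS=: / read -r var val trace lines
      val=${val# }                       # drop the space after the colon
      case "$var" in                     # emits the case "$var" in trace lines
          'Queue depth') expected_qd=$val ;;
          'Run time')    expected_rt=$val ;;
      esac
  done < accel_perf_output.txt           # hypothetical capture of output like that shown above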
00:07:20.987 [2024-07-26 21:53:31.855166] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022153 ] 00:07:20.987 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.987 [2024-07-26 21:53:31.938906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.987 [2024-07-26 21:53:31.973250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val= 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val= 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val=0x1 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val= 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val= 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val= 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val=software 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val=32 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val=32 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r 
var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val=1 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val=No 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val= 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:20.987 21:53:32 -- accel/accel.sh@21 -- # val= 00:07:20.987 21:53:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # IFS=: 00:07:20.987 21:53:32 -- accel/accel.sh@20 -- # read -r var val 00:07:21.925 21:53:33 -- accel/accel.sh@21 -- # val= 00:07:21.925 21:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.925 21:53:33 -- accel/accel.sh@21 -- # val= 00:07:21.925 21:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.925 21:53:33 -- accel/accel.sh@21 -- # val= 00:07:21.925 21:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.925 21:53:33 -- accel/accel.sh@21 -- # val= 00:07:21.925 21:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.925 21:53:33 -- accel/accel.sh@21 -- # val= 00:07:21.925 21:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.925 21:53:33 -- accel/accel.sh@21 -- # val= 00:07:21.925 21:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:21.925 21:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.925 21:53:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.925 21:53:33 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:21.925 21:53:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.925 00:07:21.925 real 0m2.636s 00:07:21.925 user 0m2.360s 00:07:21.925 sys 0m0.284s 00:07:21.925 21:53:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.925 21:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:21.925 ************************************ 00:07:21.925 END TEST accel_dif_generate_copy 00:07:21.925 ************************************ 00:07:22.185 21:53:33 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:22.185 21:53:33 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:22.185 21:53:33 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:22.185 21:53:33 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.185 21:53:33 -- common/autotest_common.sh@10 -- # set +x 00:07:22.185 ************************************ 00:07:22.185 START TEST accel_comp 00:07:22.185 ************************************ 00:07:22.185 21:53:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:22.185 21:53:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.185 21:53:33 -- accel/accel.sh@17 -- # local accel_module 00:07:22.185 21:53:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:22.185 21:53:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:22.185 21:53:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.185 21:53:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.185 21:53:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.185 21:53:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.185 21:53:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.185 21:53:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.185 21:53:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.185 21:53:33 -- accel/accel.sh@42 -- # jq -r . 00:07:22.185 [2024-07-26 21:53:33.221620] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:22.185 [2024-07-26 21:53:33.221832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022416 ] 00:07:22.185 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.185 [2024-07-26 21:53:33.306287] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.185 [2024-07-26 21:53:33.341480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.562 21:53:34 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:23.562 00:07:23.562 SPDK Configuration: 00:07:23.562 Core mask: 0x1 00:07:23.562 00:07:23.562 Accel Perf Configuration: 00:07:23.562 Workload Type: compress 00:07:23.562 Transfer size: 4096 bytes 00:07:23.562 Vector count 1 00:07:23.562 Module: software 00:07:23.562 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.562 Queue depth: 32 00:07:23.562 Allocate depth: 32 00:07:23.562 # threads/core: 1 00:07:23.562 Run time: 1 seconds 00:07:23.562 Verify: No 00:07:23.562 00:07:23.562 Running for 1 seconds... 
00:07:23.562 00:07:23.562 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.562 ------------------------------------------------------------------------------------ 00:07:23.562 0,0 65024/s 271 MiB/s 0 0 00:07:23.562 ==================================================================================== 00:07:23.562 Total 65024/s 254 MiB/s 0 0' 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.562 21:53:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.562 21:53:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.562 21:53:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.562 21:53:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.562 21:53:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.562 21:53:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.562 21:53:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.562 21:53:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.562 21:53:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.562 21:53:34 -- accel/accel.sh@42 -- # jq -r . 00:07:23.562 [2024-07-26 21:53:34.536098] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:23.562 [2024-07-26 21:53:34.536168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022568 ] 00:07:23.562 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.562 [2024-07-26 21:53:34.621062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.562 [2024-07-26 21:53:34.655555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.562 21:53:34 -- accel/accel.sh@21 -- # val= 00:07:23.562 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.562 21:53:34 -- accel/accel.sh@21 -- # val= 00:07:23.562 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.562 21:53:34 -- accel/accel.sh@21 -- # val= 00:07:23.562 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.562 21:53:34 -- accel/accel.sh@21 -- # val=0x1 00:07:23.562 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.562 21:53:34 -- accel/accel.sh@21 -- # val= 00:07:23.562 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.562 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val= 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val=compress 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val= 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val=software 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val=32 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val=32 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val=1 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val=No 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val= 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:23.563 21:53:34 -- accel/accel.sh@21 -- # val= 00:07:23.563 21:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:23.563 21:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:24.940 21:53:35 -- accel/accel.sh@21 -- # val= 00:07:24.940 21:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.940 21:53:35 -- accel/accel.sh@21 -- # val= 00:07:24.940 21:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.940 21:53:35 -- accel/accel.sh@21 -- # val= 00:07:24.940 21:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.940 
21:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.940 21:53:35 -- accel/accel.sh@21 -- # val= 00:07:24.940 21:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.940 21:53:35 -- accel/accel.sh@21 -- # val= 00:07:24.940 21:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.940 21:53:35 -- accel/accel.sh@21 -- # val= 00:07:24.940 21:53:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:24.940 21:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.940 21:53:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.940 21:53:35 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:24.940 21:53:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.940 00:07:24.940 real 0m2.634s 00:07:24.940 user 0m2.366s 00:07:24.940 sys 0m0.279s 00:07:24.940 21:53:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.940 21:53:35 -- common/autotest_common.sh@10 -- # set +x 00:07:24.940 ************************************ 00:07:24.940 END TEST accel_comp 00:07:24.940 ************************************ 00:07:24.940 21:53:35 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.940 21:53:35 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:24.940 21:53:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.940 21:53:35 -- common/autotest_common.sh@10 -- # set +x 00:07:24.940 ************************************ 00:07:24.940 START TEST accel_decomp 00:07:24.940 ************************************ 00:07:24.940 21:53:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.940 21:53:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.940 21:53:35 -- accel/accel.sh@17 -- # local accel_module 00:07:24.940 21:53:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.940 21:53:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:24.941 21:53:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.941 21:53:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.941 21:53:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.941 21:53:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.941 21:53:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.941 21:53:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.941 21:53:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.941 21:53:35 -- accel/accel.sh@42 -- # jq -r . 00:07:24.941 [2024-07-26 21:53:35.906301] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
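Unlike the xor and DIF workloads, the compress/decompress runs need real input data, which is why the command line gains -l pointing at the bundled bib test file (the same path shown as 'File Name' in the compress configuration above). The invocation, copied from the @12 trace line, with the flag meanings annotated (annotations are editorial, not harness output):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y
  # -l: file to decompress, -y: verify the result (hence 'Verify: Yes' above)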
00:07:24.941 [2024-07-26 21:53:35.906364] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2022762 ] 00:07:24.941 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.941 [2024-07-26 21:53:35.990849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.941 [2024-07-26 21:53:36.026153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.317 21:53:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:26.317 00:07:26.317 SPDK Configuration: 00:07:26.317 Core mask: 0x1 00:07:26.317 00:07:26.317 Accel Perf Configuration: 00:07:26.317 Workload Type: decompress 00:07:26.317 Transfer size: 4096 bytes 00:07:26.317 Vector count 1 00:07:26.317 Module: software 00:07:26.317 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.317 Queue depth: 32 00:07:26.317 Allocate depth: 32 00:07:26.317 # threads/core: 1 00:07:26.317 Run time: 1 seconds 00:07:26.317 Verify: Yes 00:07:26.317 00:07:26.317 Running for 1 seconds... 00:07:26.317 00:07:26.317 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.317 ------------------------------------------------------------------------------------ 00:07:26.317 0,0 87232/s 160 MiB/s 0 0 00:07:26.317 ==================================================================================== 00:07:26.317 Total 87232/s 340 MiB/s 0 0' 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:26.317 21:53:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:26.317 21:53:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.317 21:53:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.317 21:53:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.317 21:53:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.317 21:53:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.317 21:53:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.317 21:53:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.317 21:53:37 -- accel/accel.sh@42 -- # jq -r . 00:07:26.317 [2024-07-26 21:53:37.221566] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
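Each accel_perf invocation here is a fresh SPDK application start: the EAL parameter lines above show a single-core mask (-c 0x1) plus a per-run --file-prefix=spdk_pidNNNN, and the matching notices report one available core with the reactor pinned to core 0. A tiny shell illustration of how that mask maps to cores (editorial example, not part of the harness):

  # The EAL core mask is a bitmap: 0x1 has only bit 0 set, i.e. core 0 alone.
  mask=0x1
  for (( i = 0; i < 8; i++ )); do
      (( (mask >> i) & 1 )) && echo "core $i selected"
  done
  # prints: core 0 selected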
00:07:26.317 [2024-07-26 21:53:37.221643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023011 ] 00:07:26.317 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.317 [2024-07-26 21:53:37.305512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.317 [2024-07-26 21:53:37.340400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val= 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val= 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val= 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val=0x1 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val= 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val= 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val=decompress 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val= 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val=software 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val=32 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- 
accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val=32 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val=1 00:07:26.317 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.317 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.317 21:53:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.318 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.318 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.318 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.318 21:53:37 -- accel/accel.sh@21 -- # val=Yes 00:07:26.318 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.318 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.318 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.318 21:53:37 -- accel/accel.sh@21 -- # val= 00:07:26.318 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.318 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.318 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:26.318 21:53:37 -- accel/accel.sh@21 -- # val= 00:07:26.318 21:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.318 21:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:26.318 21:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:27.696 21:53:38 -- accel/accel.sh@21 -- # val= 00:07:27.696 21:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.696 21:53:38 -- accel/accel.sh@21 -- # val= 00:07:27.696 21:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.696 21:53:38 -- accel/accel.sh@21 -- # val= 00:07:27.696 21:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.696 21:53:38 -- accel/accel.sh@21 -- # val= 00:07:27.696 21:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.696 21:53:38 -- accel/accel.sh@21 -- # val= 00:07:27.696 21:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.696 21:53:38 -- accel/accel.sh@21 -- # val= 00:07:27.696 21:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:27.696 21:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.696 21:53:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.696 21:53:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:27.696 21:53:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.696 00:07:27.696 real 0m2.637s 00:07:27.696 user 0m2.361s 00:07:27.696 sys 0m0.287s 00:07:27.696 21:53:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.696 21:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:27.696 ************************************ 00:07:27.696 END TEST accel_decomp 00:07:27.696 ************************************ 00:07:27.696 21:53:38 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:27.696 21:53:38 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:27.696 21:53:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.696 21:53:38 -- common/autotest_common.sh@10 -- # set +x 00:07:27.696 ************************************ 00:07:27.696 START TEST accel_decmop_full 00:07:27.696 ************************************ 00:07:27.696 21:53:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:27.696 21:53:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.696 21:53:38 -- accel/accel.sh@17 -- # local accel_module 00:07:27.696 21:53:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:27.696 21:53:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:27.696 21:53:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.696 21:53:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.696 21:53:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.696 21:53:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.696 21:53:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.696 21:53:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.696 21:53:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.696 21:53:38 -- accel/accel.sh@42 -- # jq -r . 00:07:27.696 [2024-07-26 21:53:38.591220] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:27.696 [2024-07-26 21:53:38.591309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023292 ] 00:07:27.696 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.696 [2024-07-26 21:53:38.678436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.696 [2024-07-26 21:53:38.713676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.074 21:53:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:29.074 00:07:29.074 SPDK Configuration: 00:07:29.074 Core mask: 0x1 00:07:29.074 00:07:29.074 Accel Perf Configuration: 00:07:29.074 Workload Type: decompress 00:07:29.074 Transfer size: 111250 bytes 00:07:29.074 Vector count 1 00:07:29.074 Module: software 00:07:29.074 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:29.074 Queue depth: 32 00:07:29.074 Allocate depth: 32 00:07:29.074 # threads/core: 1 00:07:29.074 Run time: 1 seconds 00:07:29.074 Verify: Yes 00:07:29.074 00:07:29.074 Running for 1 seconds... 
00:07:29.074 00:07:29.074 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:29.074 ------------------------------------------------------------------------------------ 00:07:29.074 0,0 5664/s 233 MiB/s 0 0 00:07:29.074 ==================================================================================== 00:07:29.074 Total 5664/s 600 MiB/s 0 0' 00:07:29.074 21:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:29.074 21:53:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:29.074 21:53:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.074 21:53:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.074 21:53:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.074 21:53:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.074 21:53:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.074 21:53:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.074 21:53:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.074 21:53:39 -- accel/accel.sh@42 -- # jq -r . 00:07:29.074 [2024-07-26 21:53:39.918722] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:29.074 [2024-07-26 21:53:39.918815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023564 ] 00:07:29.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.074 [2024-07-26 21:53:40.004339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.074 [2024-07-26 21:53:40.043429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val= 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val= 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val= 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val=0x1 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val= 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val= 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val=decompress 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 
00:07:29.074 21:53:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val= 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val=software 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val=32 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val=32 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val=1 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val=Yes 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val= 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:29.074 21:53:40 -- accel/accel.sh@21 -- # val= 00:07:29.074 21:53:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # IFS=: 00:07:29.074 21:53:40 -- accel/accel.sh@20 -- # read -r var val 00:07:30.011 21:53:41 -- accel/accel.sh@21 -- # val= 00:07:30.011 21:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.011 21:53:41 -- accel/accel.sh@21 -- # val= 00:07:30.011 21:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.011 21:53:41 -- accel/accel.sh@21 -- # val= 00:07:30.011 21:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.011 21:53:41 -- 
accel/accel.sh@20 -- # IFS=: 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.011 21:53:41 -- accel/accel.sh@21 -- # val= 00:07:30.011 21:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.011 21:53:41 -- accel/accel.sh@21 -- # val= 00:07:30.011 21:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.011 21:53:41 -- accel/accel.sh@21 -- # val= 00:07:30.011 21:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:30.011 21:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.011 21:53:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.011 21:53:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:30.011 21:53:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.011 00:07:30.011 real 0m2.662s 00:07:30.011 user 0m2.381s 00:07:30.011 sys 0m0.288s 00:07:30.011 21:53:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.011 21:53:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.011 ************************************ 00:07:30.011 END TEST accel_decmop_full 00:07:30.011 ************************************ 00:07:30.270 21:53:41 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.270 21:53:41 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:30.270 21:53:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.270 21:53:41 -- common/autotest_common.sh@10 -- # set +x 00:07:30.270 ************************************ 00:07:30.270 START TEST accel_decomp_mcore 00:07:30.270 ************************************ 00:07:30.270 21:53:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.270 21:53:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.270 21:53:41 -- accel/accel.sh@17 -- # local accel_module 00:07:30.270 21:53:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.270 21:53:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:30.270 21:53:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.270 21:53:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.270 21:53:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.270 21:53:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.270 21:53:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.270 21:53:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.270 21:53:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.270 21:53:41 -- accel/accel.sh@42 -- # jq -r . 00:07:30.270 [2024-07-26 21:53:41.303739] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
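The accel_decmop_full pass that finishes above adds -o 0 to the same command line, and its configuration dump reports "Transfer size: 111250 bytes" instead of 4096; -o 0 therefore appears to request the full decompressed buffer per operation (an inference from the reported configuration, not stated explicitly in the log). A sketch of that variant plus a quick cross-check of the reported bandwidth:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0
  # cross-check: 5664 transfers/s at 111250 bytes each
  python3 -c 'print(5664 * 111250 / 2**20)'   # ~600.9, in line with the 600 MiB/s Total row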
00:07:30.270 [2024-07-26 21:53:41.303831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2023852 ] 00:07:30.270 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.270 [2024-07-26 21:53:41.388669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.270 [2024-07-26 21:53:41.426533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.270 [2024-07-26 21:53:41.426556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.270 [2024-07-26 21:53:41.426647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.270 [2024-07-26 21:53:41.426649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.649 21:53:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:31.649 00:07:31.649 SPDK Configuration: 00:07:31.649 Core mask: 0xf 00:07:31.649 00:07:31.649 Accel Perf Configuration: 00:07:31.649 Workload Type: decompress 00:07:31.649 Transfer size: 4096 bytes 00:07:31.649 Vector count 1 00:07:31.649 Module: software 00:07:31.649 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:31.649 Queue depth: 32 00:07:31.649 Allocate depth: 32 00:07:31.649 # threads/core: 1 00:07:31.649 Run time: 1 seconds 00:07:31.649 Verify: Yes 00:07:31.649 00:07:31.649 Running for 1 seconds... 00:07:31.649 00:07:31.649 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.649 ------------------------------------------------------------------------------------ 00:07:31.649 0,0 74176/s 136 MiB/s 0 0 00:07:31.649 3,0 74720/s 137 MiB/s 0 0 00:07:31.649 2,0 74432/s 137 MiB/s 0 0 00:07:31.649 1,0 74112/s 136 MiB/s 0 0 00:07:31.649 ==================================================================================== 00:07:31.649 Total 297440/s 1161 MiB/s 0 0' 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.649 21:53:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:31.649 21:53:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.649 21:53:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.649 21:53:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.649 21:53:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.649 21:53:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.649 21:53:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.649 21:53:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.649 21:53:42 -- accel/accel.sh@42 -- # jq -r . 00:07:31.649 [2024-07-26 21:53:42.629720] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
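For accel_decomp_mcore the harness adds -m 0xf, and the EAL parameters above show the matching -c 0xf with four reactors started; each core then reports roughly 74k transfers/s, for an aggregate of about 1161 MiB/s. A sketch of the multi-core variant, assuming -m takes the same hex core mask that reappears as -c 0xf in the EAL line:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf
  # expect one result row per core in the mask (cores 0-3 for 0xf)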
00:07:31.649 [2024-07-26 21:53:42.629790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024123 ] 00:07:31.649 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.649 [2024-07-26 21:53:42.712957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.649 [2024-07-26 21:53:42.749734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.649 [2024-07-26 21:53:42.749832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.649 [2024-07-26 21:53:42.749906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.649 [2024-07-26 21:53:42.749908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val= 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val= 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val= 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val=0xf 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val= 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val= 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val=decompress 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val= 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val=software 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val=32 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val=32 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val=1 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.649 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.649 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.649 21:53:42 -- accel/accel.sh@21 -- # val=Yes 00:07:31.650 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.650 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.650 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.650 21:53:42 -- accel/accel.sh@21 -- # val= 00:07:31.650 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.650 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.650 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:31.650 21:53:42 -- accel/accel.sh@21 -- # val= 00:07:31.650 21:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.650 21:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:31.650 21:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 
-- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@21 -- # val= 00:07:33.092 21:53:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:33.092 21:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:33.092 21:53:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:33.092 21:53:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:33.092 21:53:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.092 00:07:33.092 real 0m2.660s 00:07:33.092 user 0m9.041s 00:07:33.092 sys 0m0.290s 00:07:33.092 21:53:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.092 21:53:43 -- common/autotest_common.sh@10 -- # set +x 00:07:33.092 ************************************ 00:07:33.092 END TEST accel_decomp_mcore 00:07:33.092 ************************************ 00:07:33.092 21:53:43 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.092 21:53:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:33.092 21:53:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.092 21:53:43 -- common/autotest_common.sh@10 -- # set +x 00:07:33.092 ************************************ 00:07:33.092 START TEST accel_decomp_full_mcore 00:07:33.092 ************************************ 00:07:33.092 21:53:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.092 21:53:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.092 21:53:43 -- accel/accel.sh@17 -- # local accel_module 00:07:33.092 21:53:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.092 21:53:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.092 21:53:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.092 21:53:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.092 21:53:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.092 21:53:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.092 21:53:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.092 21:53:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.092 21:53:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.092 21:53:43 -- accel/accel.sh@42 -- # jq -r . 00:07:33.092 [2024-07-26 21:53:44.009268] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
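The accel_decomp_mcore timing summary just above (real 0m2.660s, user 0m9.041s) is roughly what four polling reactors across two 1-second runs would produce, so user time several times larger than wall time is expected here rather than a sign of trouble. The aggregate row can be cross-checked the same way as before:

  # 297440 transfers/s at 4096 bytes each
  python3 -c 'print(297440 * 4096 / 2**20)'   # ~1161.9, matching the 1161 MiB/s Total row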
00:07:33.092 [2024-07-26 21:53:44.009347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024394 ] 00:07:33.092 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.092 [2024-07-26 21:53:44.094527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.092 [2024-07-26 21:53:44.132203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.092 [2024-07-26 21:53:44.132299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.092 [2024-07-26 21:53:44.132383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.092 [2024-07-26 21:53:44.132385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.472 21:53:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:34.472 00:07:34.472 SPDK Configuration: 00:07:34.472 Core mask: 0xf 00:07:34.472 00:07:34.472 Accel Perf Configuration: 00:07:34.472 Workload Type: decompress 00:07:34.472 Transfer size: 111250 bytes 00:07:34.472 Vector count 1 00:07:34.472 Module: software 00:07:34.472 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:34.472 Queue depth: 32 00:07:34.472 Allocate depth: 32 00:07:34.472 # threads/core: 1 00:07:34.472 Run time: 1 seconds 00:07:34.472 Verify: Yes 00:07:34.472 00:07:34.472 Running for 1 seconds... 00:07:34.472 00:07:34.472 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.472 ------------------------------------------------------------------------------------ 00:07:34.472 0,0 5408/s 223 MiB/s 0 0 00:07:34.472 3,0 5728/s 236 MiB/s 0 0 00:07:34.472 2,0 5728/s 236 MiB/s 0 0 00:07:34.472 1,0 5728/s 236 MiB/s 0 0 00:07:34.472 ==================================================================================== 00:07:34.472 Total 22592/s 2396 MiB/s 0 0' 00:07:34.472 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.472 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.472 21:53:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.472 21:53:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:34.473 21:53:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.473 21:53:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.473 21:53:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.473 21:53:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.473 21:53:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.473 21:53:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.473 21:53:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.473 21:53:45 -- accel/accel.sh@42 -- # jq -r . 00:07:34.473 [2024-07-26 21:53:45.343065] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
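accel_decomp_full_mcore, whose first pass is reported above, simply combines the two earlier options: full 111250-byte transfers (-o 0) across the 0xf core mask (-m 0xf), giving about 5.6k transfers/s per core and a 2396 MiB/s total. A sketch of the combined invocation:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf
  # 22592 transfers/s * 111250 B ~= 2396 MiB/s, matching the Total row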
00:07:34.473 [2024-07-26 21:53:45.343134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024567 ] 00:07:34.473 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.473 [2024-07-26 21:53:45.427463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.473 [2024-07-26 21:53:45.464641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.473 [2024-07-26 21:53:45.464701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.473 [2024-07-26 21:53:45.464783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.473 [2024-07-26 21:53:45.464785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val= 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val= 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val= 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val=0xf 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val= 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val= 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val=decompress 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val= 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val=software 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val=32 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val=32 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val=1 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val=Yes 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val= 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:34.473 21:53:45 -- accel/accel.sh@21 -- # val= 00:07:34.473 21:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:34.473 21:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 
-- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@21 -- # val= 00:07:35.852 21:53:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # IFS=: 00:07:35.852 21:53:46 -- accel/accel.sh@20 -- # read -r var val 00:07:35.852 21:53:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.852 21:53:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:35.852 21:53:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.852 00:07:35.852 real 0m2.673s 00:07:35.852 user 0m9.095s 00:07:35.852 sys 0m0.300s 00:07:35.852 21:53:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.852 21:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.852 ************************************ 00:07:35.852 END TEST accel_decomp_full_mcore 00:07:35.852 ************************************ 00:07:35.852 21:53:46 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:35.852 21:53:46 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:35.852 21:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.852 21:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:35.852 ************************************ 00:07:35.852 START TEST accel_decomp_mthread 00:07:35.852 ************************************ 00:07:35.852 21:53:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:35.852 21:53:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.852 21:53:46 -- accel/accel.sh@17 -- # local accel_module 00:07:35.852 21:53:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:35.852 21:53:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:35.852 21:53:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.852 21:53:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.852 21:53:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.852 21:53:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.852 21:53:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.852 21:53:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.852 21:53:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.852 21:53:46 -- accel/accel.sh@42 -- # jq -r . 00:07:35.852 [2024-07-26 21:53:46.730195] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:35.852 [2024-07-26 21:53:46.730268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024770 ] 00:07:35.852 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.852 [2024-07-26 21:53:46.814774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.852 [2024-07-26 21:53:46.849924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.232 21:53:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:37.232 00:07:37.232 SPDK Configuration: 00:07:37.232 Core mask: 0x1 00:07:37.232 00:07:37.232 Accel Perf Configuration: 00:07:37.232 Workload Type: decompress 00:07:37.232 Transfer size: 4096 bytes 00:07:37.232 Vector count 1 00:07:37.232 Module: software 00:07:37.232 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:37.232 Queue depth: 32 00:07:37.232 Allocate depth: 32 00:07:37.232 # threads/core: 2 00:07:37.232 Run time: 1 seconds 00:07:37.232 Verify: Yes 00:07:37.232 00:07:37.232 Running for 1 seconds... 00:07:37.232 00:07:37.232 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:37.232 ------------------------------------------------------------------------------------ 00:07:37.232 0,1 44448/s 81 MiB/s 0 0 00:07:37.232 0,0 44288/s 81 MiB/s 0 0 00:07:37.232 ==================================================================================== 00:07:37.232 Total 88736/s 346 MiB/s 0 0' 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.232 21:53:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:37.232 21:53:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.232 21:53:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.232 21:53:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.232 21:53:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.232 21:53:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.232 21:53:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.232 21:53:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.232 21:53:48 -- accel/accel.sh@42 -- # jq -r . 00:07:37.232 [2024-07-26 21:53:48.050107] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
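The accel_decomp_mthread configuration above shows "# threads/core: 2", and the result table gains a second row ("0,1") for the extra thread on core 0; the only new flag in the trace is -T 2, which presumably sets that per-core thread count. A sketch of the threaded variant:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2
  # two rows for core 0 (threads 0 and 1), ~88736 transfers/s combined (~346 MiB/s)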
00:07:37.232 [2024-07-26 21:53:48.050171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024996 ] 00:07:37.232 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.232 [2024-07-26 21:53:48.135214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.232 [2024-07-26 21:53:48.170565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val= 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val= 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val= 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val=0x1 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val= 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val= 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val=decompress 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val= 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val=software 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val=32 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- 
accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val=32 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val=2 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val=Yes 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val= 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:37.232 21:53:48 -- accel/accel.sh@21 -- # val= 00:07:37.232 21:53:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # IFS=: 00:07:37.232 21:53:48 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 21:53:49 -- accel/accel.sh@21 -- # val= 00:07:38.171 21:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 21:53:49 -- accel/accel.sh@21 -- # val= 00:07:38.171 21:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 21:53:49 -- accel/accel.sh@21 -- # val= 00:07:38.171 21:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 21:53:49 -- accel/accel.sh@21 -- # val= 00:07:38.171 21:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 21:53:49 -- accel/accel.sh@21 -- # val= 00:07:38.171 21:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 21:53:49 -- accel/accel.sh@21 -- # val= 00:07:38.171 21:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 21:53:49 -- accel/accel.sh@21 -- # val= 00:07:38.171 21:53:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 21:53:49 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 21:53:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.171 21:53:49 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:38.171 21:53:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.171 00:07:38.171 real 0m2.645s 00:07:38.171 user 0m2.373s 00:07:38.171 sys 0m0.283s 00:07:38.171 21:53:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.171 21:53:49 -- common/autotest_common.sh@10 -- # set +x 
00:07:38.171 ************************************ 00:07:38.171 END TEST accel_decomp_mthread 00:07:38.171 ************************************ 00:07:38.171 21:53:49 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.171 21:53:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:38.171 21:53:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.171 21:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:38.171 ************************************ 00:07:38.171 START TEST accel_deomp_full_mthread 00:07:38.171 ************************************ 00:07:38.431 21:53:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.431 21:53:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.431 21:53:49 -- accel/accel.sh@17 -- # local accel_module 00:07:38.431 21:53:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.431 21:53:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.431 21:53:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.431 21:53:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.431 21:53:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.431 21:53:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.431 21:53:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.431 21:53:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.431 21:53:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.431 21:53:49 -- accel/accel.sh@42 -- # jq -r . 00:07:38.431 [2024-07-26 21:53:49.425280] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:38.431 [2024-07-26 21:53:49.425351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2025283 ] 00:07:38.431 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.431 [2024-07-26 21:53:49.510604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.431 [2024-07-26 21:53:49.545452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.811 21:53:50 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:39.811 00:07:39.811 SPDK Configuration: 00:07:39.811 Core mask: 0x1 00:07:39.811 00:07:39.811 Accel Perf Configuration: 00:07:39.811 Workload Type: decompress 00:07:39.811 Transfer size: 111250 bytes 00:07:39.811 Vector count 1 00:07:39.811 Module: software 00:07:39.811 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:39.811 Queue depth: 32 00:07:39.811 Allocate depth: 32 00:07:39.811 # threads/core: 2 00:07:39.811 Run time: 1 seconds 00:07:39.811 Verify: Yes 00:07:39.811 00:07:39.811 Running for 1 seconds... 
00:07:39.811 00:07:39.811 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.811 ------------------------------------------------------------------------------------ 00:07:39.811 0,1 2944/s 121 MiB/s 0 0 00:07:39.811 0,0 2880/s 118 MiB/s 0 0 00:07:39.811 ==================================================================================== 00:07:39.811 Total 5824/s 617 MiB/s 0 0' 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.811 21:53:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:39.811 21:53:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.811 21:53:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.811 21:53:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.811 21:53:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.811 21:53:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.811 21:53:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.811 21:53:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.811 21:53:50 -- accel/accel.sh@42 -- # jq -r . 00:07:39.811 [2024-07-26 21:53:50.762402] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:39.811 [2024-07-26 21:53:50.762469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2025549 ] 00:07:39.811 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.811 [2024-07-26 21:53:50.844459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.811 [2024-07-26 21:53:50.878686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val= 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val= 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val= 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val=0x1 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val= 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val= 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val=decompress 00:07:39.811 21:53:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val= 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val=software 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.811 21:53:50 -- accel/accel.sh@21 -- # val=32 00:07:39.811 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.811 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.812 21:53:50 -- accel/accel.sh@21 -- # val=32 00:07:39.812 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.812 21:53:50 -- accel/accel.sh@21 -- # val=2 00:07:39.812 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.812 21:53:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.812 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.812 21:53:50 -- accel/accel.sh@21 -- # val=Yes 00:07:39.812 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.812 21:53:50 -- accel/accel.sh@21 -- # val= 00:07:39.812 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:39.812 21:53:50 -- accel/accel.sh@21 -- # val= 00:07:39.812 21:53:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # IFS=: 00:07:39.812 21:53:50 -- accel/accel.sh@20 -- # read -r var val 00:07:41.192 21:53:52 -- accel/accel.sh@21 -- # val= 00:07:41.192 21:53:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.192 21:53:52 -- accel/accel.sh@21 -- # val= 00:07:41.192 21:53:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.192 21:53:52 -- accel/accel.sh@21 -- # val= 00:07:41.192 21:53:52 -- accel/accel.sh@22 -- # case "$var" in 
00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.192 21:53:52 -- accel/accel.sh@21 -- # val= 00:07:41.192 21:53:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.192 21:53:52 -- accel/accel.sh@21 -- # val= 00:07:41.192 21:53:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.192 21:53:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 21:53:52 -- accel/accel.sh@21 -- # val= 00:07:41.193 21:53:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 21:53:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 21:53:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 21:53:52 -- accel/accel.sh@21 -- # val= 00:07:41.193 21:53:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.193 21:53:52 -- accel/accel.sh@20 -- # IFS=: 00:07:41.193 21:53:52 -- accel/accel.sh@20 -- # read -r var val 00:07:41.193 21:53:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:41.193 21:53:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:41.193 21:53:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.193 00:07:41.193 real 0m2.676s 00:07:41.193 user 0m2.406s 00:07:41.193 sys 0m0.278s 00:07:41.193 21:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.193 21:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:41.193 ************************************ 00:07:41.193 END TEST accel_deomp_full_mthread 00:07:41.193 ************************************ 00:07:41.193 21:53:52 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:41.193 21:53:52 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.193 21:53:52 -- accel/accel.sh@129 -- # build_accel_config 00:07:41.193 21:53:52 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:41.193 21:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.193 21:53:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.193 21:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:41.193 21:53:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.193 21:53:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.193 21:53:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.193 21:53:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.193 21:53:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.193 21:53:52 -- accel/accel.sh@42 -- # jq -r . 00:07:41.193 ************************************ 00:07:41.193 START TEST accel_dif_functional_tests 00:07:41.193 ************************************ 00:07:41.193 21:53:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:41.193 [2024-07-26 21:53:52.163044] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
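For reference, the two-thread software decompress run summarized above (core 0, threads 0 and 1, roughly 120 MiB/s each) was driven by the accel_perf example. A minimal standalone sketch, assuming the same workspace layout as this job and that the empty accel JSON config built by the harness can simply be omitted (flag meanings inferred from the surrounding trace):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# -t run time in seconds, -w workload, -l input file shipped with the accel tests,
# -y verify the output; -o and -T are copied verbatim from the traced command.
$SPDK/build/examples/accel_perf -t 1 -w decompress \
    -l "$SPDK/test/accel/bib" -y -o 0 -T 2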
00:07:41.193 [2024-07-26 21:53:52.163098] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2025833 ] 00:07:41.193 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.193 [2024-07-26 21:53:52.248463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.193 [2024-07-26 21:53:52.285913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.193 [2024-07-26 21:53:52.286009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.193 [2024-07-26 21:53:52.286010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.193 00:07:41.193 00:07:41.193 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.193 http://cunit.sourceforge.net/ 00:07:41.193 00:07:41.193 00:07:41.193 Suite: accel_dif 00:07:41.193 Test: verify: DIF generated, GUARD check ...passed 00:07:41.193 Test: verify: DIF generated, APPTAG check ...passed 00:07:41.193 Test: verify: DIF generated, REFTAG check ...passed 00:07:41.193 Test: verify: DIF not generated, GUARD check ...[2024-07-26 21:53:52.348738] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:41.193 [2024-07-26 21:53:52.348784] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:41.193 passed 00:07:41.193 Test: verify: DIF not generated, APPTAG check ...[2024-07-26 21:53:52.348814] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:41.193 [2024-07-26 21:53:52.348830] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:41.193 passed 00:07:41.193 Test: verify: DIF not generated, REFTAG check ...[2024-07-26 21:53:52.348848] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:41.193 [2024-07-26 21:53:52.348864] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:41.193 passed 00:07:41.193 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:41.193 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-26 21:53:52.348905] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:41.193 passed 00:07:41.193 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:41.193 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:41.193 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:41.193 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-26 21:53:52.349006] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:41.193 passed 00:07:41.193 Test: generate copy: DIF generated, GUARD check ...passed 00:07:41.193 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:41.193 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:41.193 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:41.193 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:41.193 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:41.193 Test: generate copy: iovecs-len validate ...[2024-07-26 21:53:52.349170] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:41.193 passed 00:07:41.193 Test: generate copy: buffer alignment validate ...passed 00:07:41.193 00:07:41.193 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.193 suites 1 1 n/a 0 0 00:07:41.193 tests 20 20 20 0 0 00:07:41.193 asserts 204 204 204 0 n/a 00:07:41.193 00:07:41.193 Elapsed time = 0.002 seconds 00:07:41.452 00:07:41.452 real 0m0.382s 00:07:41.452 user 0m0.548s 00:07:41.452 sys 0m0.173s 00:07:41.452 21:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.452 21:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:41.452 ************************************ 00:07:41.452 END TEST accel_dif_functional_tests 00:07:41.452 ************************************ 00:07:41.452 00:07:41.452 real 0m56.496s 00:07:41.452 user 1m3.517s 00:07:41.452 sys 0m7.683s 00:07:41.452 21:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.452 21:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:41.452 ************************************ 00:07:41.452 END TEST accel 00:07:41.452 ************************************ 00:07:41.452 21:53:52 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:41.452 21:53:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.452 21:53:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.452 21:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:41.452 ************************************ 00:07:41.452 START TEST accel_rpc 00:07:41.452 ************************************ 00:07:41.452 21:53:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:41.452 * Looking for test storage... 00:07:41.712 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:41.712 21:53:52 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:41.712 21:53:52 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2025907 00:07:41.712 21:53:52 -- accel/accel_rpc.sh@15 -- # waitforlisten 2025907 00:07:41.712 21:53:52 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:41.712 21:53:52 -- common/autotest_common.sh@819 -- # '[' -z 2025907 ']' 00:07:41.712 21:53:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.712 21:53:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:41.712 21:53:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.712 21:53:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:41.712 21:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:41.712 [2024-07-26 21:53:52.736566] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
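The accel_rpc suite starting here uses the usual SPDK pre-init pattern: spdk_tgt is launched with --wait-for-rpc so it pauses before subsystem initialization, the harness waits for the RPC socket, issues its configuration RPCs, and only then calls framework_start_init. A hand-run sketch of that startup (the real waitforlisten helper does PID and timeout handling that is skipped here):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/spdk_tgt --wait-for-rpc &
tgt_pid=$!
# crude stand-in for waitforlisten: poll for the default RPC socket
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
# pre-init RPCs go here (see the accel_assign_opc calls below), then:
$SPDK/scripts/rpc.py framework_start_init
# ...tests run...
kill "$tgt_pid"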
00:07:41.713 [2024-07-26 21:53:52.736621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2025907 ] 00:07:41.713 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.713 [2024-07-26 21:53:52.821129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.713 [2024-07-26 21:53:52.857566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.713 [2024-07-26 21:53:52.857693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.648 21:53:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.648 21:53:53 -- common/autotest_common.sh@852 -- # return 0 00:07:42.648 21:53:53 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:42.648 21:53:53 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:42.649 21:53:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.649 21:53:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.649 21:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.649 ************************************ 00:07:42.649 START TEST accel_assign_opcode 00:07:42.649 ************************************ 00:07:42.649 21:53:53 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:42.649 21:53:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.649 21:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.649 [2024-07-26 21:53:53.531694] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:42.649 21:53:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:42.649 21:53:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.649 21:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.649 [2024-07-26 21:53:53.539718] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:42.649 21:53:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:42.649 21:53:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.649 21:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.649 21:53:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:42.649 21:53:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.649 21:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@42 -- # grep software 00:07:42.649 21:53:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.649 software 00:07:42.649 00:07:42.649 real 0m0.208s 00:07:42.649 user 0m0.031s 00:07:42.649 sys 0m0.014s 00:07:42.649 21:53:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.649 21:53:53 -- common/autotest_common.sh@10 -- # set +x 
00:07:42.649 ************************************ 00:07:42.649 END TEST accel_assign_opcode 00:07:42.649 ************************************ 00:07:42.649 21:53:53 -- accel/accel_rpc.sh@55 -- # killprocess 2025907 00:07:42.649 21:53:53 -- common/autotest_common.sh@926 -- # '[' -z 2025907 ']' 00:07:42.649 21:53:53 -- common/autotest_common.sh@930 -- # kill -0 2025907 00:07:42.649 21:53:53 -- common/autotest_common.sh@931 -- # uname 00:07:42.649 21:53:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:42.649 21:53:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2025907 00:07:42.649 21:53:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:42.649 21:53:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:42.649 21:53:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2025907' 00:07:42.649 killing process with pid 2025907 00:07:42.649 21:53:53 -- common/autotest_common.sh@945 -- # kill 2025907 00:07:42.649 21:53:53 -- common/autotest_common.sh@950 -- # wait 2025907 00:07:42.908 00:07:42.908 real 0m1.526s 00:07:42.908 user 0m1.510s 00:07:42.908 sys 0m0.485s 00:07:42.908 21:53:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.908 21:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:42.908 ************************************ 00:07:42.908 END TEST accel_rpc 00:07:42.908 ************************************ 00:07:43.167 21:53:54 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.167 21:53:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.167 21:53:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.167 21:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:43.167 ************************************ 00:07:43.167 START TEST app_cmdline 00:07:43.167 ************************************ 00:07:43.167 21:53:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.167 * Looking for test storage... 00:07:43.167 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:43.167 21:53:54 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:43.167 21:53:54 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2026248 00:07:43.167 21:53:54 -- app/cmdline.sh@18 -- # waitforlisten 2026248 00:07:43.167 21:53:54 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:43.167 21:53:54 -- common/autotest_common.sh@819 -- # '[' -z 2026248 ']' 00:07:43.167 21:53:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.167 21:53:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:43.167 21:53:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.167 21:53:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:43.167 21:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:43.167 [2024-07-26 21:53:54.320091] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
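Stripped of the xtrace noise, the accel_assign_opcode test that just finished above comes down to four RPCs against that paused target: assign the copy opcode to a non-existent module, reassign it to the software module (the second call supersedes the first, which is what the final grep checks), start the framework, and confirm the assignment. Roughly, with paths as in this job:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc accel_assign_opc -o copy -m incorrect   # accepted pre-init even for a bogus module name
$rpc accel_assign_opc -o copy -m software    # the assignment expected to stick
$rpc framework_start_init
$rpc accel_get_opc_assignments | jq -r .copy | grep software   # prints: software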
00:07:43.167 [2024-07-26 21:53:54.320151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2026248 ] 00:07:43.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.426 [2024-07-26 21:53:54.407377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.426 [2024-07-26 21:53:54.444941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:43.426 [2024-07-26 21:53:54.445066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.994 21:53:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:43.994 21:53:55 -- common/autotest_common.sh@852 -- # return 0 00:07:43.994 21:53:55 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:44.254 { 00:07:44.254 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:07:44.254 "fields": { 00:07:44.254 "major": 24, 00:07:44.254 "minor": 1, 00:07:44.254 "patch": 1, 00:07:44.254 "suffix": "-pre", 00:07:44.254 "commit": "dbef7efac" 00:07:44.254 } 00:07:44.254 } 00:07:44.254 21:53:55 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:44.254 21:53:55 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:44.254 21:53:55 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:44.254 21:53:55 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:44.254 21:53:55 -- app/cmdline.sh@26 -- # sort 00:07:44.254 21:53:55 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:44.254 21:53:55 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:44.254 21:53:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:44.254 21:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.254 21:53:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:44.254 21:53:55 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:44.254 21:53:55 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:44.254 21:53:55 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.254 21:53:55 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.254 21:53:55 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.254 21:53:55 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:44.254 21:53:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.254 21:53:55 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:44.254 21:53:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.254 21:53:55 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:44.254 21:53:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.254 21:53:55 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:44.254 21:53:55 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:44.254 21:53:55 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.254 request: 00:07:44.254 { 00:07:44.254 "method": "env_dpdk_get_mem_stats", 00:07:44.254 "req_id": 1 00:07:44.254 } 00:07:44.254 Got JSON-RPC error response 00:07:44.254 response: 00:07:44.254 { 00:07:44.254 "code": -32601, 00:07:44.254 "message": "Method not found" 00:07:44.254 } 00:07:44.254 21:53:55 -- common/autotest_common.sh@643 -- # es=1 00:07:44.254 21:53:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.254 21:53:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:44.254 21:53:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.254 21:53:55 -- app/cmdline.sh@1 -- # killprocess 2026248 00:07:44.254 21:53:55 -- common/autotest_common.sh@926 -- # '[' -z 2026248 ']' 00:07:44.254 21:53:55 -- common/autotest_common.sh@930 -- # kill -0 2026248 00:07:44.254 21:53:55 -- common/autotest_common.sh@931 -- # uname 00:07:44.254 21:53:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:44.254 21:53:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2026248 00:07:44.513 21:53:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:44.513 21:53:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:44.513 21:53:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2026248' 00:07:44.513 killing process with pid 2026248 00:07:44.513 21:53:55 -- common/autotest_common.sh@945 -- # kill 2026248 00:07:44.513 21:53:55 -- common/autotest_common.sh@950 -- # wait 2026248 00:07:44.772 00:07:44.772 real 0m1.646s 00:07:44.772 user 0m1.894s 00:07:44.772 sys 0m0.483s 00:07:44.773 21:53:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.773 21:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.773 ************************************ 00:07:44.773 END TEST app_cmdline 00:07:44.773 ************************************ 00:07:44.773 21:53:55 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:44.773 21:53:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:44.773 21:53:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.773 21:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:44.773 ************************************ 00:07:44.773 START TEST version 00:07:44.773 ************************************ 00:07:44.773 21:53:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:44.773 * Looking for test storage... 
00:07:44.773 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:44.773 21:53:55 -- app/version.sh@17 -- # get_header_version major 00:07:44.773 21:53:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:44.773 21:53:55 -- app/version.sh@14 -- # cut -f2 00:07:44.773 21:53:55 -- app/version.sh@14 -- # tr -d '"' 00:07:44.773 21:53:55 -- app/version.sh@17 -- # major=24 00:07:44.773 21:53:55 -- app/version.sh@18 -- # get_header_version minor 00:07:44.773 21:53:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:44.773 21:53:55 -- app/version.sh@14 -- # cut -f2 00:07:44.773 21:53:55 -- app/version.sh@14 -- # tr -d '"' 00:07:44.773 21:53:55 -- app/version.sh@18 -- # minor=1 00:07:44.773 21:53:55 -- app/version.sh@19 -- # get_header_version patch 00:07:44.773 21:53:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:44.773 21:53:55 -- app/version.sh@14 -- # tr -d '"' 00:07:44.773 21:53:55 -- app/version.sh@14 -- # cut -f2 00:07:44.773 21:53:55 -- app/version.sh@19 -- # patch=1 00:07:44.773 21:53:55 -- app/version.sh@20 -- # get_header_version suffix 00:07:44.773 21:53:55 -- app/version.sh@14 -- # cut -f2 00:07:44.773 21:53:55 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:44.773 21:53:55 -- app/version.sh@14 -- # tr -d '"' 00:07:44.773 21:53:55 -- app/version.sh@20 -- # suffix=-pre 00:07:44.773 21:53:55 -- app/version.sh@22 -- # version=24.1 00:07:44.773 21:53:55 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:44.773 21:53:55 -- app/version.sh@25 -- # version=24.1.1 00:07:44.773 21:53:55 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:44.773 21:53:55 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:44.773 21:53:55 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:45.032 21:53:56 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:45.032 21:53:56 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:45.032 00:07:45.032 real 0m0.159s 00:07:45.032 user 0m0.065s 00:07:45.032 sys 0m0.132s 00:07:45.032 21:53:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.033 21:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.033 ************************************ 00:07:45.033 END TEST version 00:07:45.033 ************************************ 00:07:45.033 21:53:56 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:45.033 21:53:56 -- spdk/autotest.sh@204 -- # uname -s 00:07:45.033 21:53:56 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:45.033 21:53:56 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:45.033 21:53:56 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:45.033 21:53:56 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:45.033 21:53:56 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:45.033 21:53:56 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:45.033 21:53:56 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:07:45.033 21:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.033 21:53:56 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:45.033 21:53:56 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:45.033 21:53:56 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:45.033 21:53:56 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:45.033 21:53:56 -- spdk/autotest.sh@291 -- # '[' rdma = rdma ']' 00:07:45.033 21:53:56 -- spdk/autotest.sh@292 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:45.033 21:53:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:45.033 21:53:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.033 21:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.033 ************************************ 00:07:45.033 START TEST nvmf_rdma 00:07:45.033 ************************************ 00:07:45.033 21:53:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:45.033 * Looking for test storage... 00:07:45.033 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:45.033 21:53:56 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:45.033 21:53:56 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:45.033 21:53:56 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.033 21:53:56 -- nvmf/common.sh@7 -- # uname -s 00:07:45.033 21:53:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.033 21:53:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.033 21:53:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.033 21:53:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.033 21:53:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.033 21:53:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.033 21:53:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.033 21:53:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.033 21:53:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.033 21:53:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.033 21:53:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:45.033 21:53:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:45.033 21:53:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.033 21:53:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.033 21:53:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.033 21:53:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:45.033 21:53:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.033 21:53:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.033 21:53:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.033 21:53:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.033 21:53:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.033 21:53:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.033 21:53:56 -- paths/export.sh@5 -- # export PATH 00:07:45.033 21:53:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.033 21:53:56 -- nvmf/common.sh@46 -- # : 0 00:07:45.033 21:53:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:45.033 21:53:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:45.033 21:53:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:45.033 21:53:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.033 21:53:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.033 21:53:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:45.033 21:53:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:45.033 21:53:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:45.033 21:53:56 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:45.033 21:53:56 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:45.033 21:53:56 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:45.033 21:53:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:45.033 21:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.293 21:53:56 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:45.293 21:53:56 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:45.293 21:53:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:45.293 21:53:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.293 21:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.293 ************************************ 00:07:45.293 START TEST nvmf_example 00:07:45.293 ************************************ 00:07:45.293 21:53:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:45.293 * Looking for test storage... 
00:07:45.293 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.293 21:53:56 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.293 21:53:56 -- nvmf/common.sh@7 -- # uname -s 00:07:45.293 21:53:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.293 21:53:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.293 21:53:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.293 21:53:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.293 21:53:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.293 21:53:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.293 21:53:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.293 21:53:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.293 21:53:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.293 21:53:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.293 21:53:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:45.293 21:53:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:45.293 21:53:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.293 21:53:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.293 21:53:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.293 21:53:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:45.293 21:53:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.293 21:53:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.293 21:53:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.293 21:53:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.293 21:53:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.293 21:53:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.293 21:53:56 -- paths/export.sh@5 -- # export PATH 00:07:45.293 21:53:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.293 21:53:56 -- nvmf/common.sh@46 -- # : 0 00:07:45.293 21:53:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:45.293 21:53:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:45.293 21:53:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:45.293 21:53:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.293 21:53:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.293 21:53:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:45.293 21:53:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:45.293 21:53:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:45.293 21:53:56 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:45.293 21:53:56 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:45.293 21:53:56 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:45.293 21:53:56 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:45.293 21:53:56 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:45.293 21:53:56 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:45.293 21:53:56 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:45.293 21:53:56 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:45.293 21:53:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:45.293 21:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:45.293 21:53:56 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:45.293 21:53:56 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:07:45.293 21:53:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.293 21:53:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:45.293 21:53:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:45.293 21:53:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:45.293 21:53:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.293 21:53:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.293 21:53:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.293 21:53:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:45.293 21:53:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:45.293 21:53:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:45.293 21:53:56 -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.447 21:54:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:53.447 21:54:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:53.447 21:54:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:53.447 21:54:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:53.447 21:54:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:53.447 21:54:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:53.447 21:54:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:53.447 21:54:04 -- nvmf/common.sh@294 -- # net_devs=() 00:07:53.447 21:54:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:53.447 21:54:04 -- nvmf/common.sh@295 -- # e810=() 00:07:53.447 21:54:04 -- nvmf/common.sh@295 -- # local -ga e810 00:07:53.447 21:54:04 -- nvmf/common.sh@296 -- # x722=() 00:07:53.447 21:54:04 -- nvmf/common.sh@296 -- # local -ga x722 00:07:53.447 21:54:04 -- nvmf/common.sh@297 -- # mlx=() 00:07:53.447 21:54:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:53.447 21:54:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.447 21:54:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:53.447 21:54:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:07:53.447 21:54:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:07:53.447 21:54:04 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:07:53.447 21:54:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:53.447 21:54:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:53.447 21:54:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:53.447 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:53.447 21:54:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:53.447 21:54:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:53.447 21:54:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:53.447 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:53.447 21:54:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:07:53.447 21:54:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:53.447 21:54:04 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:53.447 21:54:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.447 21:54:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:53.447 21:54:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.447 21:54:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:53.447 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:53.447 21:54:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.447 21:54:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:53.447 21:54:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.447 21:54:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:53.447 21:54:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.447 21:54:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:53.447 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:53.447 21:54:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.447 21:54:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:53.447 21:54:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:53.447 21:54:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:07:53.447 21:54:04 -- nvmf/common.sh@408 -- # rdma_device_init 00:07:53.447 21:54:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:07:53.448 21:54:04 -- nvmf/common.sh@57 -- # uname 00:07:53.448 21:54:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:07:53.448 21:54:04 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:07:53.448 21:54:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:07:53.448 21:54:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:07:53.448 21:54:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:07:53.448 21:54:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:07:53.448 21:54:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:07:53.448 21:54:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:07:53.448 21:54:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:07:53.448 21:54:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:53.448 21:54:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:07:53.448 21:54:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:53.448 21:54:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:53.448 21:54:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:53.448 21:54:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:53.448 21:54:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:53.448 21:54:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:53.448 21:54:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:53.448 21:54:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:53.448 21:54:04 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:07:53.448 21:54:04 -- nvmf/common.sh@104 -- # continue 2 00:07:53.448 21:54:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:53.448 21:54:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:53.448 21:54:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:53.448 21:54:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:53.448 21:54:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:53.448 21:54:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:53.448 21:54:04 -- nvmf/common.sh@104 -- # continue 2 00:07:53.448 21:54:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:53.448 21:54:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:07:53.448 21:54:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:53.448 21:54:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:53.448 21:54:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:53.448 21:54:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:53.448 21:54:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:07:53.448 21:54:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:07:53.448 21:54:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:07:53.448 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:53.448 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:53.448 altname enp217s0f0np0 00:07:53.448 altname ens818f0np0 00:07:53.448 inet 192.168.100.8/24 scope global mlx_0_0 00:07:53.448 valid_lft forever preferred_lft forever 00:07:53.448 21:54:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:07:53.448 21:54:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:07:53.448 21:54:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:53.448 21:54:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:53.448 21:54:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:53.448 21:54:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:53.448 21:54:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:07:53.448 21:54:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:07:53.448 21:54:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:07:53.448 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:53.448 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:53.448 altname enp217s0f1np1 00:07:53.448 altname ens818f1np1 00:07:53.448 inet 192.168.100.9/24 scope global mlx_0_1 00:07:53.448 valid_lft forever preferred_lft forever 00:07:53.448 21:54:04 -- nvmf/common.sh@410 -- # return 0 00:07:53.448 21:54:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:53.448 21:54:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:53.448 21:54:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:07:53.448 21:54:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:07:53.448 21:54:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:07:53.448 21:54:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:53.448 21:54:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:07:53.448 21:54:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:07:53.448 21:54:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:53.716 21:54:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:07:53.716 21:54:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:53.716 21:54:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:53.716 21:54:04 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:53.716 21:54:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:07:53.716 21:54:04 -- nvmf/common.sh@104 -- # continue 2 00:07:53.716 21:54:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:07:53.716 21:54:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:53.716 21:54:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:53.716 21:54:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:53.716 21:54:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:53.716 21:54:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:07:53.716 21:54:04 -- nvmf/common.sh@104 -- # continue 2 00:07:53.716 21:54:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:53.716 21:54:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:07:53.716 21:54:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:07:53.716 21:54:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:07:53.716 21:54:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:53.716 21:54:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:53.716 21:54:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:07:53.716 21:54:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:07:53.716 21:54:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:07:53.716 21:54:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:07:53.716 21:54:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:07:53.716 21:54:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:07:53.716 21:54:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:07:53.716 192.168.100.9' 00:07:53.716 21:54:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:07:53.716 192.168.100.9' 00:07:53.716 21:54:04 -- nvmf/common.sh@445 -- # head -n 1 00:07:53.716 21:54:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:53.716 21:54:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:53.716 192.168.100.9' 00:07:53.716 21:54:04 -- nvmf/common.sh@446 -- # tail -n +2 00:07:53.716 21:54:04 -- nvmf/common.sh@446 -- # head -n 1 00:07:53.716 21:54:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:53.716 21:54:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:07:53.716 21:54:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:53.716 21:54:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:07:53.716 21:54:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:07:53.716 21:54:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:07:53.716 21:54:04 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:53.716 21:54:04 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:53.716 21:54:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:53.716 21:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:53.716 21:54:04 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:53.716 21:54:04 -- target/nvmf_example.sh@34 -- # nvmfpid=2031169 00:07:53.716 21:54:04 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:53.716 21:54:04 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.716 21:54:04 -- target/nvmf_example.sh@36 -- # waitforlisten 2031169 00:07:53.716 21:54:04 -- common/autotest_common.sh@819 -- # '[' -z 2031169 ']' 00:07:53.716 21:54:04 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.716 21:54:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:53.716 21:54:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.716 21:54:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:53.716 21:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:53.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.652 21:54:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:54.652 21:54:05 -- common/autotest_common.sh@852 -- # return 0 00:07:54.652 21:54:05 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:54.652 21:54:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:54.652 21:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 21:54:05 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:54.652 21:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.652 21:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 21:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.652 21:54:05 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:54.652 21:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.652 21:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 21:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.652 21:54:05 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:54.652 21:54:05 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:54.652 21:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.652 21:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 21:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.652 21:54:05 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:54.652 21:54:05 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:54.652 21:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.652 21:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 21:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.652 21:54:05 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:54.652 21:54:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.652 21:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:54.652 21:54:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.652 21:54:05 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:54.652 21:54:05 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:54.911 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.123 Initializing NVMe Controllers 00:08:07.123 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:07.123 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
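The rpc_cmd calls traced above correspond to SPDK's scripts/rpc.py JSON-RPC client, so the same RDMA target setup and perf run can be sketched as a standalone sequence. This is only a rough reconstruction from the trace: $SPDK_DIR is a placeholder (not from the log), the RPC socket is assumed to be the default /var/tmp/spdk.sock shown above, and all argument values are taken verbatim from the traced commands:
  # start the nvmf example target and wait until its RPC socket answers
  $SPDK_DIR/build/examples/nvmf -i 0 -g 10000 -m 0xF &
  until $SPDK_DIR/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done
  # transport, backing bdev, subsystem, namespace, RDMA listener
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512            # creates Malloc0, as in the trace
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # exercise the target from the initiator side, same workload as the traced run
  $SPDK_DIR/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'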
00:08:07.123 Initialization complete. Launching workers. 00:08:07.123 ======================================================== 00:08:07.123 Latency(us) 00:08:07.123 Device Information : IOPS MiB/s Average min max 00:08:07.123 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24904.39 97.28 2569.57 590.59 13006.64 00:08:07.123 ======================================================== 00:08:07.123 Total : 24904.39 97.28 2569.57 590.59 13006.64 00:08:07.123 00:08:07.123 21:54:17 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:07.123 21:54:17 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:07.123 21:54:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:07.123 21:54:17 -- nvmf/common.sh@116 -- # sync 00:08:07.123 21:54:17 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:07.123 21:54:17 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:07.123 21:54:17 -- nvmf/common.sh@119 -- # set +e 00:08:07.123 21:54:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:07.123 21:54:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:07.123 rmmod nvme_rdma 00:08:07.123 rmmod nvme_fabrics 00:08:07.123 21:54:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:07.123 21:54:17 -- nvmf/common.sh@123 -- # set -e 00:08:07.123 21:54:17 -- nvmf/common.sh@124 -- # return 0 00:08:07.123 21:54:17 -- nvmf/common.sh@477 -- # '[' -n 2031169 ']' 00:08:07.123 21:54:17 -- nvmf/common.sh@478 -- # killprocess 2031169 00:08:07.123 21:54:17 -- common/autotest_common.sh@926 -- # '[' -z 2031169 ']' 00:08:07.123 21:54:17 -- common/autotest_common.sh@930 -- # kill -0 2031169 00:08:07.123 21:54:17 -- common/autotest_common.sh@931 -- # uname 00:08:07.123 21:54:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:07.123 21:54:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2031169 00:08:07.123 21:54:17 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:07.123 21:54:17 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:07.123 21:54:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2031169' 00:08:07.123 killing process with pid 2031169 00:08:07.123 21:54:17 -- common/autotest_common.sh@945 -- # kill 2031169 00:08:07.123 21:54:17 -- common/autotest_common.sh@950 -- # wait 2031169 00:08:07.123 nvmf threads initialize successfully 00:08:07.123 bdev subsystem init successfully 00:08:07.123 created a nvmf target service 00:08:07.123 create targets's poll groups done 00:08:07.123 all subsystems of target started 00:08:07.123 nvmf target is running 00:08:07.123 all subsystems of target stopped 00:08:07.123 destroy targets's poll groups done 00:08:07.123 destroyed the nvmf target service 00:08:07.123 bdev subsystem finish successfully 00:08:07.123 nvmf threads destroy successfully 00:08:07.123 21:54:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:07.123 21:54:17 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:07.123 21:54:17 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:07.123 21:54:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:07.123 21:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:07.123 00:08:07.123 real 0m21.238s 00:08:07.123 user 0m52.567s 00:08:07.123 sys 0m6.885s 00:08:07.123 21:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.123 21:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:07.123 ************************************ 00:08:07.123 END TEST nvmf_example 00:08:07.123 
************************************ 00:08:07.123 21:54:17 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:07.123 21:54:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:07.123 21:54:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.123 21:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:07.123 ************************************ 00:08:07.123 START TEST nvmf_filesystem 00:08:07.123 ************************************ 00:08:07.123 21:54:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:07.123 * Looking for test storage... 00:08:07.123 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.123 21:54:17 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:07.123 21:54:17 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:07.123 21:54:17 -- common/autotest_common.sh@34 -- # set -e 00:08:07.123 21:54:17 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:07.123 21:54:17 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:07.123 21:54:17 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:07.123 21:54:17 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:07.123 21:54:17 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:07.123 21:54:17 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:07.123 21:54:17 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:07.123 21:54:17 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:07.123 21:54:17 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:07.123 21:54:17 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:07.123 21:54:17 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:07.123 21:54:17 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:07.123 21:54:17 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:07.123 21:54:17 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:07.123 21:54:17 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:07.123 21:54:17 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:07.123 21:54:17 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:07.123 21:54:17 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:07.123 21:54:17 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:07.123 21:54:17 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:07.123 21:54:17 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:07.123 21:54:17 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:07.123 21:54:17 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:07.123 21:54:17 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:07.123 21:54:17 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:07.123 21:54:17 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:07.123 21:54:17 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:07.123 21:54:17 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:07.123 21:54:17 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:07.123 21:54:17 -- common/build_config.sh@26 -- # 
CONFIG_HAVE_ARC4RANDOM=y 00:08:07.123 21:54:17 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:07.123 21:54:17 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:07.123 21:54:17 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:07.123 21:54:17 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:07.123 21:54:17 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:07.123 21:54:17 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:07.124 21:54:17 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:07.124 21:54:17 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:07.124 21:54:17 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:07.124 21:54:17 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:07.124 21:54:17 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:07.124 21:54:17 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:07.124 21:54:17 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:07.124 21:54:17 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:07.124 21:54:17 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:07.124 21:54:17 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:07.124 21:54:17 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:07.124 21:54:17 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:07.124 21:54:17 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:07.124 21:54:17 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:07.124 21:54:17 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:07.124 21:54:17 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:07.124 21:54:17 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:07.124 21:54:17 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:07.124 21:54:17 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:07.124 21:54:17 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:07.124 21:54:17 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:07.124 21:54:17 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:07.124 21:54:17 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:07.124 21:54:17 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:07.124 21:54:17 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:07.124 21:54:17 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:07.124 21:54:17 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:07.124 21:54:17 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:07.124 21:54:17 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:07.124 21:54:17 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:07.124 21:54:17 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:07.124 21:54:17 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:07.124 21:54:17 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:07.124 21:54:17 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:07.124 21:54:17 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:07.124 21:54:17 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:07.124 21:54:17 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:07.124 21:54:17 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:07.124 21:54:17 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:07.124 21:54:17 
-- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:07.124 21:54:17 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:07.124 21:54:17 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:07.124 21:54:17 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:07.124 21:54:17 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:07.124 21:54:17 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:07.124 21:54:17 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:07.124 21:54:17 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:07.124 21:54:17 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:07.124 21:54:17 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:07.124 21:54:17 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:07.124 21:54:17 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:07.124 21:54:17 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:07.124 21:54:17 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:07.124 21:54:17 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:07.124 21:54:17 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:07.124 21:54:17 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:07.124 21:54:17 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:07.124 21:54:17 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:07.124 21:54:17 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:07.124 21:54:17 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:07.124 21:54:17 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:07.124 21:54:17 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:07.124 21:54:17 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:07.124 #define SPDK_CONFIG_H 00:08:07.124 #define SPDK_CONFIG_APPS 1 00:08:07.124 #define SPDK_CONFIG_ARCH native 00:08:07.124 #undef SPDK_CONFIG_ASAN 00:08:07.124 #undef SPDK_CONFIG_AVAHI 00:08:07.124 #undef SPDK_CONFIG_CET 00:08:07.124 #define SPDK_CONFIG_COVERAGE 1 00:08:07.124 #define SPDK_CONFIG_CROSS_PREFIX 00:08:07.124 #undef SPDK_CONFIG_CRYPTO 00:08:07.124 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:07.124 #undef SPDK_CONFIG_CUSTOMOCF 00:08:07.124 #undef SPDK_CONFIG_DAOS 00:08:07.124 #define SPDK_CONFIG_DAOS_DIR 00:08:07.124 #define SPDK_CONFIG_DEBUG 1 00:08:07.124 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:07.124 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:07.124 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:07.124 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:07.124 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:07.124 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:07.124 #define SPDK_CONFIG_EXAMPLES 1 00:08:07.124 #undef SPDK_CONFIG_FC 00:08:07.124 #define SPDK_CONFIG_FC_PATH 00:08:07.124 #define 
SPDK_CONFIG_FIO_PLUGIN 1 00:08:07.124 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:07.124 #undef SPDK_CONFIG_FUSE 00:08:07.124 #undef SPDK_CONFIG_FUZZER 00:08:07.124 #define SPDK_CONFIG_FUZZER_LIB 00:08:07.124 #undef SPDK_CONFIG_GOLANG 00:08:07.124 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:07.124 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:07.124 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:07.124 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:07.124 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:07.124 #define SPDK_CONFIG_IDXD 1 00:08:07.124 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:07.124 #undef SPDK_CONFIG_IPSEC_MB 00:08:07.124 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:07.124 #define SPDK_CONFIG_ISAL 1 00:08:07.124 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:07.124 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:07.124 #define SPDK_CONFIG_LIBDIR 00:08:07.124 #undef SPDK_CONFIG_LTO 00:08:07.124 #define SPDK_CONFIG_MAX_LCORES 00:08:07.124 #define SPDK_CONFIG_NVME_CUSE 1 00:08:07.124 #undef SPDK_CONFIG_OCF 00:08:07.124 #define SPDK_CONFIG_OCF_PATH 00:08:07.124 #define SPDK_CONFIG_OPENSSL_PATH 00:08:07.124 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:07.124 #undef SPDK_CONFIG_PGO_USE 00:08:07.124 #define SPDK_CONFIG_PREFIX /usr/local 00:08:07.124 #undef SPDK_CONFIG_RAID5F 00:08:07.124 #undef SPDK_CONFIG_RBD 00:08:07.124 #define SPDK_CONFIG_RDMA 1 00:08:07.124 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:07.124 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:07.124 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:07.124 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:07.124 #define SPDK_CONFIG_SHARED 1 00:08:07.124 #undef SPDK_CONFIG_SMA 00:08:07.124 #define SPDK_CONFIG_TESTS 1 00:08:07.124 #undef SPDK_CONFIG_TSAN 00:08:07.124 #define SPDK_CONFIG_UBLK 1 00:08:07.124 #define SPDK_CONFIG_UBSAN 1 00:08:07.124 #undef SPDK_CONFIG_UNIT_TESTS 00:08:07.124 #undef SPDK_CONFIG_URING 00:08:07.124 #define SPDK_CONFIG_URING_PATH 00:08:07.124 #undef SPDK_CONFIG_URING_ZNS 00:08:07.124 #undef SPDK_CONFIG_USDT 00:08:07.124 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:07.124 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:07.124 #undef SPDK_CONFIG_VFIO_USER 00:08:07.124 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:07.124 #define SPDK_CONFIG_VHOST 1 00:08:07.124 #define SPDK_CONFIG_VIRTIO 1 00:08:07.124 #undef SPDK_CONFIG_VTUNE 00:08:07.124 #define SPDK_CONFIG_VTUNE_DIR 00:08:07.124 #define SPDK_CONFIG_WERROR 1 00:08:07.124 #define SPDK_CONFIG_WPDK_DIR 00:08:07.124 #undef SPDK_CONFIG_XNVME 00:08:07.124 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:07.124 21:54:17 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:07.124 21:54:17 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:07.124 21:54:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.124 21:54:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.124 21:54:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.124 21:54:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.124 21:54:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.125 21:54:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.125 21:54:17 -- paths/export.sh@5 -- # export PATH 00:08:07.125 21:54:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.125 21:54:17 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:07.125 21:54:17 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:07.125 21:54:17 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:07.125 21:54:17 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:07.125 21:54:17 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:07.125 21:54:17 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:07.125 21:54:17 -- pm/common@16 -- # TEST_TAG=N/A 00:08:07.125 21:54:17 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:07.125 21:54:17 -- common/autotest_common.sh@52 -- # : 1 00:08:07.125 21:54:17 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:07.125 21:54:17 -- common/autotest_common.sh@56 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:07.125 21:54:17 -- common/autotest_common.sh@58 -- # : 0 
00:08:07.125 21:54:17 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:07.125 21:54:17 -- common/autotest_common.sh@60 -- # : 1 00:08:07.125 21:54:17 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:07.125 21:54:17 -- common/autotest_common.sh@62 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:07.125 21:54:17 -- common/autotest_common.sh@64 -- # : 00:08:07.125 21:54:17 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:07.125 21:54:17 -- common/autotest_common.sh@66 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:07.125 21:54:17 -- common/autotest_common.sh@68 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:07.125 21:54:17 -- common/autotest_common.sh@70 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:07.125 21:54:17 -- common/autotest_common.sh@72 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:07.125 21:54:17 -- common/autotest_common.sh@74 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:07.125 21:54:17 -- common/autotest_common.sh@76 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:07.125 21:54:17 -- common/autotest_common.sh@78 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:07.125 21:54:17 -- common/autotest_common.sh@80 -- # : 1 00:08:07.125 21:54:17 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:07.125 21:54:17 -- common/autotest_common.sh@82 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:07.125 21:54:17 -- common/autotest_common.sh@84 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:07.125 21:54:17 -- common/autotest_common.sh@86 -- # : 1 00:08:07.125 21:54:17 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:07.125 21:54:17 -- common/autotest_common.sh@88 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:07.125 21:54:17 -- common/autotest_common.sh@90 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:07.125 21:54:17 -- common/autotest_common.sh@92 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:07.125 21:54:17 -- common/autotest_common.sh@94 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:07.125 21:54:17 -- common/autotest_common.sh@96 -- # : rdma 00:08:07.125 21:54:17 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:07.125 21:54:17 -- common/autotest_common.sh@98 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:07.125 21:54:17 -- common/autotest_common.sh@100 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:07.125 21:54:17 -- common/autotest_common.sh@102 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:07.125 21:54:17 -- common/autotest_common.sh@104 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:07.125 21:54:17 -- common/autotest_common.sh@106 
-- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:07.125 21:54:17 -- common/autotest_common.sh@108 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:07.125 21:54:17 -- common/autotest_common.sh@110 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:07.125 21:54:17 -- common/autotest_common.sh@112 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:07.125 21:54:17 -- common/autotest_common.sh@114 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:07.125 21:54:17 -- common/autotest_common.sh@116 -- # : 1 00:08:07.125 21:54:17 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:07.125 21:54:17 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:07.125 21:54:17 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:07.125 21:54:17 -- common/autotest_common.sh@120 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:07.125 21:54:17 -- common/autotest_common.sh@122 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:07.125 21:54:17 -- common/autotest_common.sh@124 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:07.125 21:54:17 -- common/autotest_common.sh@126 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:07.125 21:54:17 -- common/autotest_common.sh@128 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:07.125 21:54:17 -- common/autotest_common.sh@130 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:07.125 21:54:17 -- common/autotest_common.sh@132 -- # : v23.11 00:08:07.125 21:54:17 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:07.125 21:54:17 -- common/autotest_common.sh@134 -- # : true 00:08:07.125 21:54:17 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:07.125 21:54:17 -- common/autotest_common.sh@136 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:07.125 21:54:17 -- common/autotest_common.sh@138 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:07.125 21:54:17 -- common/autotest_common.sh@140 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:07.125 21:54:17 -- common/autotest_common.sh@142 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:07.125 21:54:17 -- common/autotest_common.sh@144 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:07.125 21:54:17 -- common/autotest_common.sh@146 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:07.125 21:54:17 -- common/autotest_common.sh@148 -- # : mlx5 00:08:07.125 21:54:17 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:07.125 21:54:17 -- common/autotest_common.sh@150 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:07.125 21:54:17 -- common/autotest_common.sh@152 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@153 -- # export 
SPDK_TEST_DAOS 00:08:07.125 21:54:17 -- common/autotest_common.sh@154 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:07.125 21:54:17 -- common/autotest_common.sh@156 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:07.125 21:54:17 -- common/autotest_common.sh@158 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:07.125 21:54:17 -- common/autotest_common.sh@160 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:07.125 21:54:17 -- common/autotest_common.sh@163 -- # : 00:08:07.125 21:54:17 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:07.125 21:54:17 -- common/autotest_common.sh@165 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:07.125 21:54:17 -- common/autotest_common.sh@167 -- # : 0 00:08:07.125 21:54:17 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:07.125 21:54:17 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:07.125 21:54:17 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:07.125 21:54:17 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:07.125 21:54:17 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:07.125 21:54:17 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.125 21:54:17 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.125 21:54:17 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.126 21:54:17 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:07.126 21:54:17 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.126 21:54:17 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.126 21:54:17 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:07.126 21:54:17 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:07.126 21:54:17 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:07.126 21:54:17 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:07.126 21:54:17 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.126 21:54:17 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.126 21:54:17 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.126 21:54:17 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.126 21:54:17 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:07.126 21:54:17 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:07.126 21:54:17 -- common/autotest_common.sh@196 -- # cat 00:08:07.126 21:54:17 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:07.126 21:54:17 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.126 21:54:17 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.126 21:54:17 -- 
common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.126 21:54:17 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.126 21:54:17 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:07.126 21:54:17 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:07.126 21:54:17 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:07.126 21:54:17 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:07.126 21:54:17 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:07.126 21:54:17 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:07.126 21:54:17 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.126 21:54:17 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.126 21:54:17 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.126 21:54:17 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.126 21:54:17 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:07.126 21:54:17 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:07.126 21:54:17 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.126 21:54:17 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.126 21:54:17 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:07.126 21:54:17 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:07.126 21:54:17 -- common/autotest_common.sh@249 -- # valgrind= 00:08:07.126 21:54:17 -- common/autotest_common.sh@255 -- # uname -s 00:08:07.126 21:54:17 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:07.126 21:54:17 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:07.126 21:54:17 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:07.126 21:54:17 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:07.126 21:54:17 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:07.126 21:54:17 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:07.126 21:54:17 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:07.126 21:54:17 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j112 00:08:07.126 21:54:17 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:07.126 21:54:17 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:07.126 21:54:17 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:07.126 21:54:17 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:07.126 21:54:17 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:07.126 21:54:17 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:07.126 21:54:17 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:07.126 21:54:17 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=rdma 00:08:07.126 21:54:17 -- common/autotest_common.sh@309 -- # [[ -z 2033603 ]] 00:08:07.126 21:54:17 -- 
common/autotest_common.sh@309 -- # kill -0 2033603 00:08:07.126 21:54:17 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:07.126 21:54:17 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:07.126 21:54:17 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:07.126 21:54:17 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:07.126 21:54:17 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:07.126 21:54:17 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:07.126 21:54:17 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:07.126 21:54:17 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:07.126 21:54:17 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.APEK98 00:08:07.126 21:54:17 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:07.126 21:54:17 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:07.126 21:54:17 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:07.126 21:54:17 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.APEK98/tests/target /tmp/spdk.APEK98 00:08:07.126 21:54:17 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:07.126 21:54:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.126 21:54:17 -- common/autotest_common.sh@318 -- # df -T 00:08:07.126 21:54:17 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:07.126 21:54:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:07.126 21:54:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=919109632 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:07.126 21:54:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=4365320192 00:08:07.126 21:54:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=49378406400 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61742276608 00:08:07.126 21:54:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=12363870208 00:08:07.126 21:54:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=30817619968 00:08:07.126 21:54:17 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=30871138304 00:08:07.126 21:54:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:07.126 21:54:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=12338671616 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12348456960 00:08:07.126 21:54:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=9785344 00:08:07.126 21:54:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=30865588224 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30871138304 00:08:07.126 21:54:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=5550080 00:08:07.126 21:54:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=6174220288 00:08:07.126 21:54:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6174224384 00:08:07.126 21:54:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:07.126 21:54:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:07.126 21:54:17 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:07.127 * Looking for test storage... 
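The set_test_storage probe that follows (through the "Found test storage at ..." message below) amounts to checking whether the filesystem backing the test directory has room for the roughly 2 GiB of scratch space requested above. A simplified sketch of that check; only requested_size and the test directory path come from the trace, the other variable names are illustrative:
  requested_size=2214592512                                 # value from the trace above
  testdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
  avail=$(df --output=avail -B1 "$testdir" | tail -n 1)     # bytes free on the backing mount
  if (( avail >= requested_size )); then
      export SPDK_TEST_STORAGE=$testdir                     # enough space: keep test files in place
  fi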
00:08:07.127 21:54:17 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:07.127 21:54:17 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:07.127 21:54:17 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.127 21:54:17 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:07.127 21:54:17 -- common/autotest_common.sh@363 -- # mount=/ 00:08:07.127 21:54:17 -- common/autotest_common.sh@365 -- # target_space=49378406400 00:08:07.127 21:54:17 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:07.127 21:54:17 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:07.127 21:54:17 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:07.127 21:54:17 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:07.127 21:54:17 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:07.127 21:54:17 -- common/autotest_common.sh@372 -- # new_size=14578462720 00:08:07.127 21:54:17 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:07.127 21:54:17 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.127 21:54:17 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.127 21:54:17 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.127 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.127 21:54:17 -- common/autotest_common.sh@380 -- # return 0 00:08:07.127 21:54:17 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:07.127 21:54:17 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:07.127 21:54:17 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:07.127 21:54:17 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:07.127 21:54:17 -- common/autotest_common.sh@1672 -- # true 00:08:07.127 21:54:17 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:07.127 21:54:17 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:07.127 21:54:17 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:07.127 21:54:17 -- common/autotest_common.sh@27 -- # exec 00:08:07.127 21:54:17 -- common/autotest_common.sh@29 -- # exec 00:08:07.127 21:54:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:07.127 21:54:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:07.127 21:54:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:07.127 21:54:17 -- common/autotest_common.sh@18 -- # set -x 00:08:07.127 21:54:17 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.127 21:54:17 -- nvmf/common.sh@7 -- # uname -s 00:08:07.127 21:54:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.127 21:54:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.127 21:54:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.127 21:54:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.127 21:54:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.127 21:54:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.127 21:54:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.127 21:54:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.127 21:54:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.127 21:54:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.127 21:54:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:07.127 21:54:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:07.127 21:54:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.127 21:54:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.127 21:54:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.127 21:54:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:07.127 21:54:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.127 21:54:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.127 21:54:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.127 21:54:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.127 21:54:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.127 21:54:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.127 21:54:17 -- paths/export.sh@5 -- # export PATH 00:08:07.127 21:54:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.127 21:54:17 -- nvmf/common.sh@46 -- # : 0 00:08:07.127 21:54:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.127 21:54:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.127 21:54:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.127 21:54:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.127 21:54:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.127 21:54:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:07.127 21:54:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.127 21:54:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.127 21:54:17 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:07.127 21:54:17 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:07.127 21:54:17 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:07.127 21:54:17 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:07.127 21:54:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.127 21:54:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.127 21:54:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.127 21:54:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.127 21:54:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.127 21:54:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.127 21:54:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.127 21:54:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:07.127 21:54:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:07.127 21:54:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:07.127 21:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.247 21:54:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:15.247 21:54:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:15.247 21:54:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:15.247 21:54:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:15.247 21:54:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:15.247 21:54:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:15.247 21:54:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:15.247 21:54:25 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:15.247 21:54:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:15.247 21:54:25 -- nvmf/common.sh@295 -- # e810=() 00:08:15.247 21:54:25 -- nvmf/common.sh@295 -- # local -ga e810 00:08:15.247 21:54:25 -- nvmf/common.sh@296 -- # x722=() 00:08:15.247 21:54:25 -- nvmf/common.sh@296 -- # local -ga x722 00:08:15.247 21:54:25 -- nvmf/common.sh@297 -- # mlx=() 00:08:15.247 21:54:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:15.247 21:54:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.247 21:54:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:15.247 21:54:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:15.247 21:54:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:15.247 21:54:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:15.247 21:54:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:15.247 21:54:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:15.247 21:54:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:15.247 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:15.247 21:54:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.247 21:54:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:15.247 21:54:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:15.247 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:15.247 21:54:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.247 21:54:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:15.247 21:54:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:15.247 
21:54:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.247 21:54:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:15.247 21:54:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.247 21:54:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:15.247 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:15.247 21:54:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.247 21:54:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:15.247 21:54:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.247 21:54:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:15.247 21:54:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.247 21:54:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:15.247 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:15.247 21:54:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.247 21:54:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:15.247 21:54:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:15.247 21:54:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:15.247 21:54:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:15.247 21:54:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:15.247 21:54:25 -- nvmf/common.sh@57 -- # uname 00:08:15.247 21:54:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:15.247 21:54:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:15.247 21:54:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:15.247 21:54:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:15.247 21:54:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:15.247 21:54:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:15.247 21:54:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:15.247 21:54:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:15.247 21:54:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:15.248 21:54:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:15.248 21:54:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:15.248 21:54:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.248 21:54:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:15.248 21:54:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:15.248 21:54:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.248 21:54:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:15.248 21:54:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:15.248 21:54:25 -- nvmf/common.sh@104 -- # continue 2 00:08:15.248 21:54:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:15.248 21:54:25 -- nvmf/common.sh@104 -- # continue 2 00:08:15.248 21:54:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:15.248 21:54:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:15.248 21:54:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:15.248 21:54:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:15.248 21:54:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:15.248 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.248 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:15.248 altname enp217s0f0np0 00:08:15.248 altname ens818f0np0 00:08:15.248 inet 192.168.100.8/24 scope global mlx_0_0 00:08:15.248 valid_lft forever preferred_lft forever 00:08:15.248 21:54:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:15.248 21:54:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:15.248 21:54:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:15.248 21:54:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:15.248 21:54:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:15.248 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.248 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:15.248 altname enp217s0f1np1 00:08:15.248 altname ens818f1np1 00:08:15.248 inet 192.168.100.9/24 scope global mlx_0_1 00:08:15.248 valid_lft forever preferred_lft forever 00:08:15.248 21:54:25 -- nvmf/common.sh@410 -- # return 0 00:08:15.248 21:54:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:15.248 21:54:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:15.248 21:54:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:15.248 21:54:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:15.248 21:54:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.248 21:54:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:15.248 21:54:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:15.248 21:54:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.248 21:54:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:15.248 21:54:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:15.248 21:54:25 -- nvmf/common.sh@104 -- # continue 2 00:08:15.248 21:54:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.248 21:54:25 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.248 21:54:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:15.248 21:54:25 -- nvmf/common.sh@104 -- # continue 2 00:08:15.248 21:54:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:15.248 21:54:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:15.248 21:54:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:15.248 21:54:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:15.248 21:54:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:15.248 21:54:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:15.248 21:54:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:15.248 21:54:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:15.248 192.168.100.9' 00:08:15.248 21:54:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:15.248 192.168.100.9' 00:08:15.248 21:54:25 -- nvmf/common.sh@445 -- # head -n 1 00:08:15.248 21:54:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:15.248 21:54:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:15.248 192.168.100.9' 00:08:15.248 21:54:25 -- nvmf/common.sh@446 -- # tail -n +2 00:08:15.248 21:54:25 -- nvmf/common.sh@446 -- # head -n 1 00:08:15.248 21:54:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:15.248 21:54:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:15.248 21:54:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:15.248 21:54:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:15.248 21:54:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:15.248 21:54:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:15.248 21:54:25 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:15.248 21:54:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:15.248 21:54:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.248 21:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:15.248 ************************************ 00:08:15.248 START TEST nvmf_filesystem_no_in_capsule 00:08:15.248 ************************************ 00:08:15.248 21:54:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:15.248 21:54:25 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:15.248 21:54:25 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:15.248 21:54:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:15.248 21:54:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:15.248 21:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:15.248 21:54:25 -- nvmf/common.sh@469 -- # nvmfpid=2037536 00:08:15.248 21:54:25 -- nvmf/common.sh@470 -- # waitforlisten 2037536 00:08:15.248 21:54:25 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.248 21:54:25 -- common/autotest_common.sh@819 -- # '[' -z 2037536 ']' 00:08:15.248 21:54:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.248 21:54:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:15.248 21:54:25 -- common/autotest_common.sh@826 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.248 21:54:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:15.248 21:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:15.248 [2024-07-26 21:54:25.603816] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:15.249 [2024-07-26 21:54:25.603878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.249 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.249 [2024-07-26 21:54:25.693876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.249 [2024-07-26 21:54:25.733157] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:15.249 [2024-07-26 21:54:25.733268] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.249 [2024-07-26 21:54:25.733278] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.249 [2024-07-26 21:54:25.733287] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.249 [2024-07-26 21:54:25.733334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.249 [2024-07-26 21:54:25.733421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.249 [2024-07-26 21:54:25.733506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.249 [2024-07-26 21:54:25.733508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.249 21:54:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:15.249 21:54:26 -- common/autotest_common.sh@852 -- # return 0 00:08:15.249 21:54:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:15.249 21:54:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:15.249 21:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.249 21:54:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.249 21:54:26 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:15.249 21:54:26 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:15.249 21:54:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.249 21:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.249 [2024-07-26 21:54:26.446971] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:15.249 [2024-07-26 21:54:26.468997] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6414b0/0x6459a0) succeed. 00:08:15.508 [2024-07-26 21:54:26.479213] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x642aa0/0x687030) succeed. 
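
At this point the harness has mapped the two ConnectX ports 0000:d9:00.0/0000:d9:00.1 to mlx_0_0/mlx_0_1, assigned them 192.168.100.8 and 192.168.100.9, loaded the IB/RDMA kernel modules, and started nvmf_tgt with an RDMA transport; the rpc_cmd lines that follow then build the test subsystem. Condensed into plain shell, and calling scripts/rpc.py directly instead of the harness's rpc_cmd wrapper, the target-side bring-up is roughly the following sketch (not the literal autotest script):

  # load the RDMA stack plus the fabrics host driver used later by nvme connect
  modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma
  # start the target; the trace runs build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF (pid 2037536)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # RDMA transport with no in-capsule data for this first pass
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  # 512 MiB malloc bdev with 512-byte blocks, exported through one subsystem and listener
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
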
00:08:15.508 21:54:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.508 21:54:26 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:15.508 21:54:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.508 21:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.508 Malloc1 00:08:15.508 21:54:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.508 21:54:26 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.508 21:54:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.508 21:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.508 21:54:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.508 21:54:26 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:15.508 21:54:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.508 21:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.508 21:54:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.508 21:54:26 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:15.508 21:54:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.508 21:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.508 [2024-07-26 21:54:26.724771] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:15.508 21:54:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.508 21:54:26 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:15.508 21:54:26 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:15.508 21:54:26 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:15.508 21:54:26 -- common/autotest_common.sh@1359 -- # local bs 00:08:15.508 21:54:26 -- common/autotest_common.sh@1360 -- # local nb 00:08:15.767 21:54:26 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:15.767 21:54:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:15.767 21:54:26 -- common/autotest_common.sh@10 -- # set +x 00:08:15.767 21:54:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:15.767 21:54:26 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:15.767 { 00:08:15.767 "name": "Malloc1", 00:08:15.767 "aliases": [ 00:08:15.767 "42c39d2e-ae13-4225-8f69-b908900c9c7a" 00:08:15.767 ], 00:08:15.767 "product_name": "Malloc disk", 00:08:15.767 "block_size": 512, 00:08:15.767 "num_blocks": 1048576, 00:08:15.767 "uuid": "42c39d2e-ae13-4225-8f69-b908900c9c7a", 00:08:15.767 "assigned_rate_limits": { 00:08:15.767 "rw_ios_per_sec": 0, 00:08:15.767 "rw_mbytes_per_sec": 0, 00:08:15.767 "r_mbytes_per_sec": 0, 00:08:15.767 "w_mbytes_per_sec": 0 00:08:15.767 }, 00:08:15.767 "claimed": true, 00:08:15.767 "claim_type": "exclusive_write", 00:08:15.767 "zoned": false, 00:08:15.767 "supported_io_types": { 00:08:15.767 "read": true, 00:08:15.767 "write": true, 00:08:15.767 "unmap": true, 00:08:15.767 "write_zeroes": true, 00:08:15.767 "flush": true, 00:08:15.767 "reset": true, 00:08:15.767 "compare": false, 00:08:15.767 "compare_and_write": false, 00:08:15.767 "abort": true, 00:08:15.767 "nvme_admin": false, 00:08:15.767 "nvme_io": false 00:08:15.767 }, 00:08:15.767 "memory_domains": [ 00:08:15.767 { 00:08:15.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.767 "dma_device_type": 2 00:08:15.767 } 00:08:15.767 ], 00:08:15.767 
"driver_specific": {} 00:08:15.767 } 00:08:15.767 ]' 00:08:15.767 21:54:26 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:15.767 21:54:26 -- common/autotest_common.sh@1362 -- # bs=512 00:08:15.767 21:54:26 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:15.767 21:54:26 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:15.767 21:54:26 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:15.767 21:54:26 -- common/autotest_common.sh@1367 -- # echo 512 00:08:15.767 21:54:26 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:15.767 21:54:26 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:16.702 21:54:27 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:16.702 21:54:27 -- common/autotest_common.sh@1177 -- # local i=0 00:08:16.702 21:54:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:16.702 21:54:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:16.702 21:54:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:19.235 21:54:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:19.235 21:54:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:19.235 21:54:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:19.235 21:54:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:19.235 21:54:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:19.235 21:54:29 -- common/autotest_common.sh@1187 -- # return 0 00:08:19.235 21:54:29 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:19.235 21:54:29 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:19.235 21:54:29 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:19.235 21:54:29 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:19.235 21:54:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:19.235 21:54:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:19.235 21:54:29 -- setup/common.sh@80 -- # echo 536870912 00:08:19.235 21:54:29 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:19.235 21:54:29 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:19.235 21:54:29 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:19.235 21:54:29 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:19.235 21:54:29 -- target/filesystem.sh@69 -- # partprobe 00:08:19.235 21:54:30 -- target/filesystem.sh@70 -- # sleep 1 00:08:20.171 21:54:31 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:20.171 21:54:31 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:20.171 21:54:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:20.171 21:54:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.172 21:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:20.172 ************************************ 00:08:20.172 START TEST filesystem_ext4 00:08:20.172 ************************************ 00:08:20.172 21:54:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:20.172 21:54:31 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:20.172 21:54:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.172 
21:54:31 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:20.172 21:54:31 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:20.172 21:54:31 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:20.172 21:54:31 -- common/autotest_common.sh@904 -- # local i=0 00:08:20.172 21:54:31 -- common/autotest_common.sh@905 -- # local force 00:08:20.172 21:54:31 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:20.172 21:54:31 -- common/autotest_common.sh@908 -- # force=-F 00:08:20.172 21:54:31 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:20.172 mke2fs 1.46.5 (30-Dec-2021) 00:08:20.172 Discarding device blocks: 0/522240 done 00:08:20.172 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:20.172 Filesystem UUID: 8aff1486-1a6c-4c63-bf3d-f93eeb66e92f 00:08:20.172 Superblock backups stored on blocks: 00:08:20.172 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:20.172 00:08:20.172 Allocating group tables: 0/64 done 00:08:20.172 Writing inode tables: 0/64 done 00:08:20.172 Creating journal (8192 blocks): done 00:08:20.172 Writing superblocks and filesystem accounting information: 0/64 done 00:08:20.172 00:08:20.172 21:54:31 -- common/autotest_common.sh@921 -- # return 0 00:08:20.172 21:54:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.172 21:54:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.172 21:54:31 -- target/filesystem.sh@25 -- # sync 00:08:20.172 21:54:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.172 21:54:31 -- target/filesystem.sh@27 -- # sync 00:08:20.172 21:54:31 -- target/filesystem.sh@29 -- # i=0 00:08:20.172 21:54:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.172 21:54:31 -- target/filesystem.sh@37 -- # kill -0 2037536 00:08:20.172 21:54:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.172 21:54:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.431 21:54:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.431 21:54:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.431 00:08:20.431 real 0m0.193s 00:08:20.431 user 0m0.025s 00:08:20.431 sys 0m0.084s 00:08:20.431 21:54:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.431 21:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:20.431 ************************************ 00:08:20.431 END TEST filesystem_ext4 00:08:20.431 ************************************ 00:08:20.431 21:54:31 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:20.431 21:54:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:20.431 21:54:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.431 21:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:20.431 ************************************ 00:08:20.431 START TEST filesystem_btrfs 00:08:20.431 ************************************ 00:08:20.431 21:54:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:20.431 21:54:31 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:20.431 21:54:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.431 21:54:31 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:20.431 21:54:31 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:20.431 21:54:31 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:20.431 21:54:31 -- common/autotest_common.sh@904 -- # 
local i=0 00:08:20.431 21:54:31 -- common/autotest_common.sh@905 -- # local force 00:08:20.431 21:54:31 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:20.431 21:54:31 -- common/autotest_common.sh@910 -- # force=-f 00:08:20.431 21:54:31 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:20.431 btrfs-progs v6.6.2 00:08:20.431 See https://btrfs.readthedocs.io for more information. 00:08:20.431 00:08:20.431 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:20.431 NOTE: several default settings have changed in version 5.15, please make sure 00:08:20.431 this does not affect your deployments: 00:08:20.431 - DUP for metadata (-m dup) 00:08:20.431 - enabled no-holes (-O no-holes) 00:08:20.431 - enabled free-space-tree (-R free-space-tree) 00:08:20.431 00:08:20.431 Label: (null) 00:08:20.431 UUID: 90c55ab7-876e-4a83-afba-ed73d782e1b3 00:08:20.431 Node size: 16384 00:08:20.431 Sector size: 4096 00:08:20.431 Filesystem size: 510.00MiB 00:08:20.431 Block group profiles: 00:08:20.431 Data: single 8.00MiB 00:08:20.431 Metadata: DUP 32.00MiB 00:08:20.431 System: DUP 8.00MiB 00:08:20.431 SSD detected: yes 00:08:20.431 Zoned device: no 00:08:20.431 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:20.431 Runtime features: free-space-tree 00:08:20.431 Checksum: crc32c 00:08:20.431 Number of devices: 1 00:08:20.431 Devices: 00:08:20.431 ID SIZE PATH 00:08:20.431 1 510.00MiB /dev/nvme0n1p1 00:08:20.431 00:08:20.431 21:54:31 -- common/autotest_common.sh@921 -- # return 0 00:08:20.431 21:54:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.690 21:54:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.690 21:54:31 -- target/filesystem.sh@25 -- # sync 00:08:20.690 21:54:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.690 21:54:31 -- target/filesystem.sh@27 -- # sync 00:08:20.690 21:54:31 -- target/filesystem.sh@29 -- # i=0 00:08:20.690 21:54:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.690 21:54:31 -- target/filesystem.sh@37 -- # kill -0 2037536 00:08:20.690 21:54:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.690 21:54:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.690 21:54:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.690 21:54:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.690 00:08:20.690 real 0m0.260s 00:08:20.690 user 0m0.026s 00:08:20.690 sys 0m0.140s 00:08:20.690 21:54:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.690 21:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:20.690 ************************************ 00:08:20.690 END TEST filesystem_btrfs 00:08:20.690 ************************************ 00:08:20.690 21:54:31 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:20.690 21:54:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:20.690 21:54:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.690 21:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:20.690 ************************************ 00:08:20.690 START TEST filesystem_xfs 00:08:20.690 ************************************ 00:08:20.690 21:54:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:20.690 21:54:31 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:20.690 21:54:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:20.690 21:54:31 -- target/filesystem.sh@21 -- # make_filesystem xfs 
/dev/nvme0n1p1 00:08:20.691 21:54:31 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:20.691 21:54:31 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:20.691 21:54:31 -- common/autotest_common.sh@904 -- # local i=0 00:08:20.691 21:54:31 -- common/autotest_common.sh@905 -- # local force 00:08:20.691 21:54:31 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:20.691 21:54:31 -- common/autotest_common.sh@910 -- # force=-f 00:08:20.691 21:54:31 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:20.691 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:20.691 = sectsz=512 attr=2, projid32bit=1 00:08:20.691 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:20.691 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:20.691 data = bsize=4096 blocks=130560, imaxpct=25 00:08:20.691 = sunit=0 swidth=0 blks 00:08:20.691 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:20.691 log =internal log bsize=4096 blocks=16384, version=2 00:08:20.691 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:20.691 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:20.691 Discarding blocks...Done. 00:08:20.691 21:54:31 -- common/autotest_common.sh@921 -- # return 0 00:08:20.691 21:54:31 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:20.949 21:54:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:20.949 21:54:31 -- target/filesystem.sh@25 -- # sync 00:08:20.949 21:54:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:20.949 21:54:31 -- target/filesystem.sh@27 -- # sync 00:08:20.949 21:54:31 -- target/filesystem.sh@29 -- # i=0 00:08:20.949 21:54:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:20.949 21:54:31 -- target/filesystem.sh@37 -- # kill -0 2037536 00:08:20.949 21:54:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:20.949 21:54:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:20.949 21:54:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:20.949 21:54:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:20.949 00:08:20.949 real 0m0.207s 00:08:20.949 user 0m0.025s 00:08:20.949 sys 0m0.086s 00:08:20.949 21:54:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.949 21:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:20.949 ************************************ 00:08:20.949 END TEST filesystem_xfs 00:08:20.949 ************************************ 00:08:20.949 21:54:32 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:20.949 21:54:32 -- target/filesystem.sh@93 -- # sync 00:08:20.949 21:54:32 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.883 21:54:33 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.883 21:54:33 -- common/autotest_common.sh@1198 -- # local i=0 00:08:21.883 21:54:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:21.883 21:54:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.883 21:54:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:21.883 21:54:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.883 21:54:33 -- common/autotest_common.sh@1210 -- # return 0 00:08:21.883 21:54:33 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.883 21:54:33 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:21.883 21:54:33 -- common/autotest_common.sh@10 -- # set +x 00:08:21.883 21:54:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.883 21:54:33 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.883 21:54:33 -- target/filesystem.sh@101 -- # killprocess 2037536 00:08:21.883 21:54:33 -- common/autotest_common.sh@926 -- # '[' -z 2037536 ']' 00:08:21.883 21:54:33 -- common/autotest_common.sh@930 -- # kill -0 2037536 00:08:21.883 21:54:33 -- common/autotest_common.sh@931 -- # uname 00:08:21.883 21:54:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:21.883 21:54:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2037536 00:08:21.883 21:54:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:21.883 21:54:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:21.883 21:54:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2037536' 00:08:21.883 killing process with pid 2037536 00:08:22.142 21:54:33 -- common/autotest_common.sh@945 -- # kill 2037536 00:08:22.142 21:54:33 -- common/autotest_common.sh@950 -- # wait 2037536 00:08:22.401 21:54:33 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:22.401 00:08:22.401 real 0m7.941s 00:08:22.401 user 0m30.930s 00:08:22.401 sys 0m1.264s 00:08:22.401 21:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.401 21:54:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.401 ************************************ 00:08:22.401 END TEST nvmf_filesystem_no_in_capsule 00:08:22.401 ************************************ 00:08:22.401 21:54:33 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:22.401 21:54:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:22.401 21:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.401 21:54:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.401 ************************************ 00:08:22.401 START TEST nvmf_filesystem_in_capsule 00:08:22.401 ************************************ 00:08:22.401 21:54:33 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:22.401 21:54:33 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:22.401 21:54:33 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:22.401 21:54:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:22.401 21:54:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:22.401 21:54:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.401 21:54:33 -- nvmf/common.sh@469 -- # nvmfpid=2039194 00:08:22.401 21:54:33 -- nvmf/common.sh@470 -- # waitforlisten 2039194 00:08:22.401 21:54:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:22.401 21:54:33 -- common/autotest_common.sh@819 -- # '[' -z 2039194 ']' 00:08:22.401 21:54:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.401 21:54:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:22.401 21:54:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
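
The first pass (nvmf_filesystem_no_in_capsule) has just finished, about 8 seconds of wall time, and the in-capsule variant is starting up under pid 2039194. Pulled together from the trace above, the host-side flow the first pass exercised against 192.168.100.8 was roughly the following sketch (the namespace shows up as nvme0n1 here, as in the log):

  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e
  # find the namespace by its SPDK serial number and carve one GPT partition
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  mkdir -p /mnt/device
  parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1
  # each subtest formats and exercises the partition; ext4 shown, btrfs and xfs are analogous
  mkfs.ext4 -F /dev/${nvme_name}p1
  mount /dev/${nvme_name}p1 /mnt/device
  touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync && umount /mnt/device
  # teardown before the next pass (killprocess in the trace: kill 2037536, then wait for it)
  flock /dev/$nvme_name parted -s /dev/$nvme_name rm 1
  sync && nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill $nvmfpid && wait $nvmfpid
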
00:08:22.401 21:54:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:22.401 21:54:33 -- common/autotest_common.sh@10 -- # set +x 00:08:22.401 [2024-07-26 21:54:33.596454] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:22.401 [2024-07-26 21:54:33.596511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.660 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.660 [2024-07-26 21:54:33.683407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:22.660 [2024-07-26 21:54:33.720272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:22.660 [2024-07-26 21:54:33.720387] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.660 [2024-07-26 21:54:33.720397] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.660 [2024-07-26 21:54:33.720407] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:22.660 [2024-07-26 21:54:33.720453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.660 [2024-07-26 21:54:33.720551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.660 [2024-07-26 21:54:33.720644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.660 [2024-07-26 21:54:33.720659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.256 21:54:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:23.256 21:54:34 -- common/autotest_common.sh@852 -- # return 0 00:08:23.256 21:54:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:23.256 21:54:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:23.256 21:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 21:54:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:23.256 21:54:34 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:23.256 21:54:34 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:23.256 21:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.256 21:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 [2024-07-26 21:54:34.456280] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12ee4b0/0x12f29a0) succeed. 00:08:23.256 [2024-07-26 21:54:34.466611] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12efaa0/0x1334030) succeed. 
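
The second pass repeats the same subsystem and filesystem steps; the one functional difference is the transport's in-capsule data size, now 4096 bytes instead of 0, so small write payloads can ride inside the command capsule instead of being pulled over a separate RDMA transfer. The two invocations, copied from the trace:

  # nvmf_filesystem_no_in_capsule (first pass)
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  # nvmf_filesystem_in_capsule (this pass)
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
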
00:08:23.515 21:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.515 21:54:34 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:23.515 21:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.515 21:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.515 Malloc1 00:08:23.515 21:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.515 21:54:34 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.515 21:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.515 21:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.515 21:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.515 21:54:34 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:23.515 21:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.515 21:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.515 21:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.515 21:54:34 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:23.515 21:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.515 21:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.515 [2024-07-26 21:54:34.730106] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:23.515 21:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.515 21:54:34 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:23.515 21:54:34 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:23.515 21:54:34 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:23.515 21:54:34 -- common/autotest_common.sh@1359 -- # local bs 00:08:23.515 21:54:34 -- common/autotest_common.sh@1360 -- # local nb 00:08:23.515 21:54:34 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:23.515 21:54:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:23.515 21:54:34 -- common/autotest_common.sh@10 -- # set +x 00:08:23.773 21:54:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:23.773 21:54:34 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:23.773 { 00:08:23.773 "name": "Malloc1", 00:08:23.773 "aliases": [ 00:08:23.773 "021a8918-43b5-48bc-9811-98fb1630e234" 00:08:23.773 ], 00:08:23.773 "product_name": "Malloc disk", 00:08:23.773 "block_size": 512, 00:08:23.773 "num_blocks": 1048576, 00:08:23.773 "uuid": "021a8918-43b5-48bc-9811-98fb1630e234", 00:08:23.773 "assigned_rate_limits": { 00:08:23.773 "rw_ios_per_sec": 0, 00:08:23.773 "rw_mbytes_per_sec": 0, 00:08:23.773 "r_mbytes_per_sec": 0, 00:08:23.773 "w_mbytes_per_sec": 0 00:08:23.773 }, 00:08:23.773 "claimed": true, 00:08:23.773 "claim_type": "exclusive_write", 00:08:23.773 "zoned": false, 00:08:23.773 "supported_io_types": { 00:08:23.773 "read": true, 00:08:23.773 "write": true, 00:08:23.773 "unmap": true, 00:08:23.773 "write_zeroes": true, 00:08:23.773 "flush": true, 00:08:23.773 "reset": true, 00:08:23.773 "compare": false, 00:08:23.773 "compare_and_write": false, 00:08:23.773 "abort": true, 00:08:23.773 "nvme_admin": false, 00:08:23.773 "nvme_io": false 00:08:23.773 }, 00:08:23.773 "memory_domains": [ 00:08:23.773 { 00:08:23.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.773 "dma_device_type": 2 00:08:23.773 } 00:08:23.773 ], 00:08:23.773 
"driver_specific": {} 00:08:23.773 } 00:08:23.773 ]' 00:08:23.773 21:54:34 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:23.773 21:54:34 -- common/autotest_common.sh@1362 -- # bs=512 00:08:23.773 21:54:34 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:23.773 21:54:34 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:23.773 21:54:34 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:23.773 21:54:34 -- common/autotest_common.sh@1367 -- # echo 512 00:08:23.773 21:54:34 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:23.773 21:54:34 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:24.709 21:54:35 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:24.709 21:54:35 -- common/autotest_common.sh@1177 -- # local i=0 00:08:24.709 21:54:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:24.709 21:54:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:24.709 21:54:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:26.613 21:54:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:26.613 21:54:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:26.613 21:54:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:26.872 21:54:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:26.872 21:54:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:26.872 21:54:37 -- common/autotest_common.sh@1187 -- # return 0 00:08:26.872 21:54:37 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:26.872 21:54:37 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:26.872 21:54:37 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:26.872 21:54:37 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:26.872 21:54:37 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:26.872 21:54:37 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:26.872 21:54:37 -- setup/common.sh@80 -- # echo 536870912 00:08:26.872 21:54:37 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:26.872 21:54:37 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:26.872 21:54:37 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:26.872 21:54:37 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:26.872 21:54:37 -- target/filesystem.sh@69 -- # partprobe 00:08:27.131 21:54:38 -- target/filesystem.sh@70 -- # sleep 1 00:08:28.068 21:54:39 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:28.068 21:54:39 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:28.068 21:54:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:28.068 21:54:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.068 21:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:28.068 ************************************ 00:08:28.068 START TEST filesystem_in_capsule_ext4 00:08:28.068 ************************************ 00:08:28.068 21:54:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:28.068 21:54:39 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:28.068 21:54:39 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:28.068 21:54:39 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:28.068 21:54:39 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:28.068 21:54:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:28.068 21:54:39 -- common/autotest_common.sh@904 -- # local i=0 00:08:28.068 21:54:39 -- common/autotest_common.sh@905 -- # local force 00:08:28.068 21:54:39 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:28.068 21:54:39 -- common/autotest_common.sh@908 -- # force=-F 00:08:28.068 21:54:39 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:28.068 mke2fs 1.46.5 (30-Dec-2021) 00:08:28.068 Discarding device blocks: 0/522240 done 00:08:28.068 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:28.068 Filesystem UUID: 1a4fdd42-e14b-4966-97d2-94ed349d64e4 00:08:28.068 Superblock backups stored on blocks: 00:08:28.068 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:28.068 00:08:28.068 Allocating group tables: 0/64 done 00:08:28.068 Writing inode tables: 0/64 done 00:08:28.068 Creating journal (8192 blocks): done 00:08:28.068 Writing superblocks and filesystem accounting information: 0/64 done 00:08:28.068 00:08:28.068 21:54:39 -- common/autotest_common.sh@921 -- # return 0 00:08:28.068 21:54:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.327 21:54:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.327 21:54:39 -- target/filesystem.sh@25 -- # sync 00:08:28.327 21:54:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.327 21:54:39 -- target/filesystem.sh@27 -- # sync 00:08:28.327 21:54:39 -- target/filesystem.sh@29 -- # i=0 00:08:28.327 21:54:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.327 21:54:39 -- target/filesystem.sh@37 -- # kill -0 2039194 00:08:28.327 21:54:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.327 21:54:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.327 21:54:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.327 21:54:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.327 00:08:28.327 real 0m0.182s 00:08:28.327 user 0m0.025s 00:08:28.327 sys 0m0.071s 00:08:28.327 21:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.327 21:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:28.327 ************************************ 00:08:28.327 END TEST filesystem_in_capsule_ext4 00:08:28.327 ************************************ 00:08:28.327 21:54:39 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:28.327 21:54:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:28.327 21:54:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.327 21:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:28.327 ************************************ 00:08:28.327 START TEST filesystem_in_capsule_btrfs 00:08:28.327 ************************************ 00:08:28.327 21:54:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:28.327 21:54:39 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:28.327 21:54:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.327 21:54:39 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:28.327 21:54:39 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:28.327 21:54:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 
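
The make_filesystem helper traced here for btrfs (and above for ext4 and xfs) differs per filesystem only in which force flag it passes: -F for mkfs.ext4, -f for mkfs.btrfs and mkfs.xfs. Ignoring the retry bookkeeping visible in the trace (the local i=0 counter), it boils down to roughly this sketch:

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local force
      # mkfs.ext4 wants -F to run non-interactively on an existing device, the others use -f
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi
      mkfs.$fstype $force "$dev_name"
  }
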
00:08:28.327 21:54:39 -- common/autotest_common.sh@904 -- # local i=0 00:08:28.327 21:54:39 -- common/autotest_common.sh@905 -- # local force 00:08:28.327 21:54:39 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:28.327 21:54:39 -- common/autotest_common.sh@910 -- # force=-f 00:08:28.327 21:54:39 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:28.327 btrfs-progs v6.6.2 00:08:28.327 See https://btrfs.readthedocs.io for more information. 00:08:28.327 00:08:28.328 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:28.328 NOTE: several default settings have changed in version 5.15, please make sure 00:08:28.328 this does not affect your deployments: 00:08:28.328 - DUP for metadata (-m dup) 00:08:28.328 - enabled no-holes (-O no-holes) 00:08:28.328 - enabled free-space-tree (-R free-space-tree) 00:08:28.328 00:08:28.328 Label: (null) 00:08:28.328 UUID: 6f697f03-0b84-4447-a6d9-3f48f76f9244 00:08:28.328 Node size: 16384 00:08:28.328 Sector size: 4096 00:08:28.328 Filesystem size: 510.00MiB 00:08:28.328 Block group profiles: 00:08:28.328 Data: single 8.00MiB 00:08:28.328 Metadata: DUP 32.00MiB 00:08:28.328 System: DUP 8.00MiB 00:08:28.328 SSD detected: yes 00:08:28.328 Zoned device: no 00:08:28.328 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:28.328 Runtime features: free-space-tree 00:08:28.328 Checksum: crc32c 00:08:28.328 Number of devices: 1 00:08:28.328 Devices: 00:08:28.328 ID SIZE PATH 00:08:28.328 1 510.00MiB /dev/nvme0n1p1 00:08:28.328 00:08:28.328 21:54:39 -- common/autotest_common.sh@921 -- # return 0 00:08:28.328 21:54:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.587 21:54:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.587 21:54:39 -- target/filesystem.sh@25 -- # sync 00:08:28.587 21:54:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.587 21:54:39 -- target/filesystem.sh@27 -- # sync 00:08:28.587 21:54:39 -- target/filesystem.sh@29 -- # i=0 00:08:28.587 21:54:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.587 21:54:39 -- target/filesystem.sh@37 -- # kill -0 2039194 00:08:28.587 21:54:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.587 21:54:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.587 21:54:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.587 21:54:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.587 00:08:28.587 real 0m0.262s 00:08:28.587 user 0m0.029s 00:08:28.587 sys 0m0.143s 00:08:28.587 21:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.587 21:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:28.587 ************************************ 00:08:28.587 END TEST filesystem_in_capsule_btrfs 00:08:28.587 ************************************ 00:08:28.587 21:54:39 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:28.587 21:54:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:28.587 21:54:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.587 21:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:28.587 ************************************ 00:08:28.587 START TEST filesystem_in_capsule_xfs 00:08:28.587 ************************************ 00:08:28.587 21:54:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:28.587 21:54:39 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:28.587 21:54:39 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:08:28.587 21:54:39 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:28.587 21:54:39 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:28.587 21:54:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:28.587 21:54:39 -- common/autotest_common.sh@904 -- # local i=0 00:08:28.587 21:54:39 -- common/autotest_common.sh@905 -- # local force 00:08:28.587 21:54:39 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:28.587 21:54:39 -- common/autotest_common.sh@910 -- # force=-f 00:08:28.587 21:54:39 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:28.846 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:28.846 = sectsz=512 attr=2, projid32bit=1 00:08:28.846 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:28.846 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:28.846 data = bsize=4096 blocks=130560, imaxpct=25 00:08:28.846 = sunit=0 swidth=0 blks 00:08:28.846 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:28.846 log =internal log bsize=4096 blocks=16384, version=2 00:08:28.846 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:28.846 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:28.846 Discarding blocks...Done. 00:08:28.846 21:54:39 -- common/autotest_common.sh@921 -- # return 0 00:08:28.846 21:54:39 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.846 21:54:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.846 21:54:39 -- target/filesystem.sh@25 -- # sync 00:08:28.846 21:54:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.846 21:54:39 -- target/filesystem.sh@27 -- # sync 00:08:28.846 21:54:39 -- target/filesystem.sh@29 -- # i=0 00:08:28.846 21:54:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.846 21:54:39 -- target/filesystem.sh@37 -- # kill -0 2039194 00:08:28.846 21:54:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.846 21:54:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.846 21:54:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.846 21:54:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.846 00:08:28.846 real 0m0.207s 00:08:28.846 user 0m0.030s 00:08:28.846 sys 0m0.081s 00:08:28.846 21:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.846 21:54:39 -- common/autotest_common.sh@10 -- # set +x 00:08:28.846 ************************************ 00:08:28.846 END TEST filesystem_in_capsule_xfs 00:08:28.846 ************************************ 00:08:28.846 21:54:39 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:28.846 21:54:40 -- target/filesystem.sh@93 -- # sync 00:08:28.846 21:54:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:29.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.782 21:54:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.782 21:54:40 -- common/autotest_common.sh@1198 -- # local i=0 00:08:29.782 21:54:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:29.782 21:54:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.782 21:54:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:29.782 21:54:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.042 21:54:41 -- common/autotest_common.sh@1210 -- # return 0 00:08:30.042 21:54:41 -- target/filesystem.sh@97 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.042 21:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.042 21:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:30.042 21:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.042 21:54:41 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:30.042 21:54:41 -- target/filesystem.sh@101 -- # killprocess 2039194 00:08:30.042 21:54:41 -- common/autotest_common.sh@926 -- # '[' -z 2039194 ']' 00:08:30.042 21:54:41 -- common/autotest_common.sh@930 -- # kill -0 2039194 00:08:30.042 21:54:41 -- common/autotest_common.sh@931 -- # uname 00:08:30.042 21:54:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:30.042 21:54:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2039194 00:08:30.042 21:54:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:30.042 21:54:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:30.042 21:54:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2039194' 00:08:30.042 killing process with pid 2039194 00:08:30.042 21:54:41 -- common/autotest_common.sh@945 -- # kill 2039194 00:08:30.042 21:54:41 -- common/autotest_common.sh@950 -- # wait 2039194 00:08:30.302 21:54:41 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:30.302 00:08:30.302 real 0m7.950s 00:08:30.302 user 0m30.968s 00:08:30.302 sys 0m1.235s 00:08:30.302 21:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.302 21:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:30.302 ************************************ 00:08:30.302 END TEST nvmf_filesystem_in_capsule 00:08:30.302 ************************************ 00:08:30.561 21:54:41 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:30.561 21:54:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:30.561 21:54:41 -- nvmf/common.sh@116 -- # sync 00:08:30.561 21:54:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:30.561 21:54:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:30.561 21:54:41 -- nvmf/common.sh@119 -- # set +e 00:08:30.561 21:54:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:30.561 21:54:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:30.561 rmmod nvme_rdma 00:08:30.561 rmmod nvme_fabrics 00:08:30.561 21:54:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:30.561 21:54:41 -- nvmf/common.sh@123 -- # set -e 00:08:30.561 21:54:41 -- nvmf/common.sh@124 -- # return 0 00:08:30.561 21:54:41 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:30.561 21:54:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:30.561 21:54:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:30.561 00:08:30.561 real 0m24.032s 00:08:30.561 user 1m4.130s 00:08:30.561 sys 0m8.518s 00:08:30.561 21:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.561 21:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:30.561 ************************************ 00:08:30.561 END TEST nvmf_filesystem 00:08:30.561 ************************************ 00:08:30.561 21:54:41 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:30.561 21:54:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:30.562 21:54:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.562 21:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:30.562 ************************************ 00:08:30.562 START TEST nvmf_discovery 00:08:30.562 
************************************ 00:08:30.562 21:54:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:30.562 * Looking for test storage... 00:08:30.562 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:30.562 21:54:41 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.562 21:54:41 -- nvmf/common.sh@7 -- # uname -s 00:08:30.562 21:54:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.562 21:54:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.562 21:54:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.562 21:54:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.562 21:54:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.562 21:54:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.562 21:54:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.562 21:54:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.562 21:54:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.562 21:54:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.562 21:54:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:30.562 21:54:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:30.562 21:54:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.562 21:54:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.562 21:54:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.562 21:54:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:30.562 21:54:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.562 21:54:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.562 21:54:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.562 21:54:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.562 21:54:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.562 21:54:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.562 21:54:41 -- paths/export.sh@5 -- # export PATH 00:08:30.562 21:54:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.562 21:54:41 -- nvmf/common.sh@46 -- # : 0 00:08:30.562 21:54:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:30.562 21:54:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:30.562 21:54:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:30.562 21:54:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.562 21:54:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.562 21:54:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:30.562 21:54:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:30.562 21:54:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:30.562 21:54:41 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:30.562 21:54:41 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:30.562 21:54:41 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:30.562 21:54:41 -- target/discovery.sh@15 -- # hash nvme 00:08:30.562 21:54:41 -- target/discovery.sh@20 -- # nvmftestinit 00:08:30.562 21:54:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:30.562 21:54:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.562 21:54:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:30.562 21:54:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:30.562 21:54:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:30.562 21:54:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.562 21:54:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.562 21:54:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.562 21:54:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:30.562 21:54:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:30.562 21:54:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:30.562 21:54:41 -- common/autotest_common.sh@10 -- # set +x 00:08:38.685 21:54:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:38.685 21:54:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:38.685 21:54:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:38.685 21:54:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:38.685 21:54:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:38.685 21:54:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:38.685 21:54:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:38.685 21:54:49 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:38.685 21:54:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:38.685 21:54:49 -- nvmf/common.sh@295 -- # e810=() 00:08:38.685 21:54:49 -- nvmf/common.sh@295 -- # local -ga e810 00:08:38.685 21:54:49 -- nvmf/common.sh@296 -- # x722=() 00:08:38.685 21:54:49 -- nvmf/common.sh@296 -- # local -ga x722 00:08:38.685 21:54:49 -- nvmf/common.sh@297 -- # mlx=() 00:08:38.685 21:54:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:38.685 21:54:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.685 21:54:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:38.685 21:54:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:38.685 21:54:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:38.686 21:54:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:38.686 21:54:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:38.686 21:54:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:38.686 21:54:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:38.686 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:38.686 21:54:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.686 21:54:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:38.686 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:38.686 21:54:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.686 21:54:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:38.686 21:54:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.686 
21:54:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.686 21:54:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.686 21:54:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.686 21:54:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:38.686 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.686 21:54:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.686 21:54:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:38.686 21:54:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.686 21:54:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:38.686 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.686 21:54:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:38.686 21:54:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:38.686 21:54:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:38.686 21:54:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:38.686 21:54:49 -- nvmf/common.sh@57 -- # uname 00:08:38.686 21:54:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:38.686 21:54:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:38.686 21:54:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:38.686 21:54:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:38.686 21:54:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:38.686 21:54:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:38.686 21:54:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:38.686 21:54:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:38.686 21:54:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:38.686 21:54:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:38.686 21:54:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:38.686 21:54:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.686 21:54:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:38.686 21:54:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:38.686 21:54:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.686 21:54:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:38.686 21:54:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@104 -- # continue 2 00:08:38.686 21:54:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@104 -- # continue 2 00:08:38.686 21:54:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:38.686 21:54:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.686 21:54:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:38.686 21:54:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:38.686 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.686 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:38.686 altname enp217s0f0np0 00:08:38.686 altname ens818f0np0 00:08:38.686 inet 192.168.100.8/24 scope global mlx_0_0 00:08:38.686 valid_lft forever preferred_lft forever 00:08:38.686 21:54:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:38.686 21:54:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.686 21:54:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:38.686 21:54:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:38.686 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:38.686 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:38.686 altname enp217s0f1np1 00:08:38.686 altname ens818f1np1 00:08:38.686 inet 192.168.100.9/24 scope global mlx_0_1 00:08:38.686 valid_lft forever preferred_lft forever 00:08:38.686 21:54:49 -- nvmf/common.sh@410 -- # return 0 00:08:38.686 21:54:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:38.686 21:54:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:38.686 21:54:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:38.686 21:54:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:38.686 21:54:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:38.686 21:54:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:38.686 21:54:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:38.686 21:54:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:38.686 21:54:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:38.686 21:54:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@104 -- # continue 2 00:08:38.686 21:54:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:38.686 21:54:49 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:38.686 21:54:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@104 -- # continue 2 00:08:38.686 21:54:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:38.686 21:54:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.686 21:54:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:38.686 21:54:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:38.686 21:54:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:38.686 21:54:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:38.686 192.168.100.9' 00:08:38.686 21:54:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:38.686 192.168.100.9' 00:08:38.686 21:54:49 -- nvmf/common.sh@445 -- # head -n 1 00:08:38.686 21:54:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:38.686 21:54:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:38.686 192.168.100.9' 00:08:38.686 21:54:49 -- nvmf/common.sh@446 -- # tail -n +2 00:08:38.686 21:54:49 -- nvmf/common.sh@446 -- # head -n 1 00:08:38.686 21:54:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:38.686 21:54:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:38.686 21:54:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:38.686 21:54:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:38.686 21:54:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:38.686 21:54:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:38.687 21:54:49 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:38.687 21:54:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:38.687 21:54:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:38.687 21:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.687 21:54:49 -- nvmf/common.sh@469 -- # nvmfpid=2044776 00:08:38.687 21:54:49 -- nvmf/common.sh@470 -- # waitforlisten 2044776 00:08:38.687 21:54:49 -- common/autotest_common.sh@819 -- # '[' -z 2044776 ']' 00:08:38.687 21:54:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.687 21:54:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:38.687 21:54:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.687 21:54:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:38.687 21:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:38.687 21:54:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.687 [2024-07-26 21:54:49.451674] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
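The target start-up captured above amounts to launching nvmf_tgt with the core mask and event flags shown, then blocking until its JSON-RPC socket answers. A minimal sketch of that start-and-wait step, assuming the nvmf_tgt binary path and the /var/tmp/spdk.sock socket that appear in this log (the harness's own nvmfappstart/waitforlisten helpers do the equivalent):
# start the target with the same flags logged above and remember its pid
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the app is ready to accept commands
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done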
00:08:38.687 [2024-07-26 21:54:49.451723] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.687 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.687 [2024-07-26 21:54:49.538188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.687 [2024-07-26 21:54:49.576411] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:38.687 [2024-07-26 21:54:49.576514] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.687 [2024-07-26 21:54:49.576525] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.687 [2024-07-26 21:54:49.576537] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.687 [2024-07-26 21:54:49.576590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.687 [2024-07-26 21:54:49.576609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.687 [2024-07-26 21:54:49.576701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.687 [2024-07-26 21:54:49.576703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.254 21:54:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:39.254 21:54:50 -- common/autotest_common.sh@852 -- # return 0 00:08:39.254 21:54:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:39.254 21:54:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:39.254 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.254 21:54:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.254 21:54:50 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:39.254 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.254 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.254 [2024-07-26 21:54:50.317821] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc444b0/0xc489a0) succeed. 00:08:39.254 [2024-07-26 21:54:50.328177] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc45aa0/0xc8a030) succeed. 
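With the RDMA transport created and both mlx5 devices registered, the test provisions its subsystems over the same RPC channel. The rpc_cmd calls that follow in this log map onto plain rpc.py invocations roughly like this (a sketch for one subsystem, against the default socket; the log shows the same sequence repeated for Null2-Null4 / cnode2-cnode4):
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_null_create Null1 102400 512    # null bdev backing the namespace, size/block size as logged
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
The nvme discover that the test then runs against 192.168.100.8:4420 should report six discovery log records: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral, which is what the output below shows.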
00:08:39.254 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.254 21:54:50 -- target/discovery.sh@26 -- # seq 1 4 00:08:39.254 21:54:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.254 21:54:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:39.254 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.254 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.254 Null1 00:08:39.254 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.254 21:54:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:39.254 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.254 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.254 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.254 21:54:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:39.254 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.254 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 [2024-07-26 21:54:50.492535] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.514 21:54:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 Null2 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.514 21:54:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 Null3 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:39.514 21:54:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 Null4 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.514 21:54:50 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:08:39.514 00:08:39.514 Discovery Log Number of Records 6, Generation counter 6 00:08:39.514 =====Discovery Log Entry 0====== 00:08:39.514 trtype: 
rdma 00:08:39.514 adrfam: ipv4 00:08:39.514 subtype: current discovery subsystem 00:08:39.514 treq: not required 00:08:39.514 portid: 0 00:08:39.514 trsvcid: 4420 00:08:39.514 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:39.514 traddr: 192.168.100.8 00:08:39.514 eflags: explicit discovery connections, duplicate discovery information 00:08:39.514 rdma_prtype: not specified 00:08:39.514 rdma_qptype: connected 00:08:39.514 rdma_cms: rdma-cm 00:08:39.514 rdma_pkey: 0x0000 00:08:39.514 =====Discovery Log Entry 1====== 00:08:39.514 trtype: rdma 00:08:39.514 adrfam: ipv4 00:08:39.514 subtype: nvme subsystem 00:08:39.514 treq: not required 00:08:39.514 portid: 0 00:08:39.514 trsvcid: 4420 00:08:39.514 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:39.514 traddr: 192.168.100.8 00:08:39.514 eflags: none 00:08:39.514 rdma_prtype: not specified 00:08:39.514 rdma_qptype: connected 00:08:39.514 rdma_cms: rdma-cm 00:08:39.514 rdma_pkey: 0x0000 00:08:39.514 =====Discovery Log Entry 2====== 00:08:39.514 trtype: rdma 00:08:39.514 adrfam: ipv4 00:08:39.514 subtype: nvme subsystem 00:08:39.514 treq: not required 00:08:39.514 portid: 0 00:08:39.514 trsvcid: 4420 00:08:39.514 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:39.514 traddr: 192.168.100.8 00:08:39.514 eflags: none 00:08:39.514 rdma_prtype: not specified 00:08:39.514 rdma_qptype: connected 00:08:39.514 rdma_cms: rdma-cm 00:08:39.514 rdma_pkey: 0x0000 00:08:39.514 =====Discovery Log Entry 3====== 00:08:39.514 trtype: rdma 00:08:39.514 adrfam: ipv4 00:08:39.514 subtype: nvme subsystem 00:08:39.514 treq: not required 00:08:39.514 portid: 0 00:08:39.514 trsvcid: 4420 00:08:39.514 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:39.514 traddr: 192.168.100.8 00:08:39.514 eflags: none 00:08:39.514 rdma_prtype: not specified 00:08:39.514 rdma_qptype: connected 00:08:39.514 rdma_cms: rdma-cm 00:08:39.514 rdma_pkey: 0x0000 00:08:39.514 =====Discovery Log Entry 4====== 00:08:39.514 trtype: rdma 00:08:39.514 adrfam: ipv4 00:08:39.514 subtype: nvme subsystem 00:08:39.514 treq: not required 00:08:39.514 portid: 0 00:08:39.514 trsvcid: 4420 00:08:39.514 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:39.514 traddr: 192.168.100.8 00:08:39.514 eflags: none 00:08:39.514 rdma_prtype: not specified 00:08:39.514 rdma_qptype: connected 00:08:39.514 rdma_cms: rdma-cm 00:08:39.514 rdma_pkey: 0x0000 00:08:39.514 =====Discovery Log Entry 5====== 00:08:39.514 trtype: rdma 00:08:39.514 adrfam: ipv4 00:08:39.514 subtype: discovery subsystem referral 00:08:39.514 treq: not required 00:08:39.514 portid: 0 00:08:39.514 trsvcid: 4430 00:08:39.514 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:39.514 traddr: 192.168.100.8 00:08:39.514 eflags: none 00:08:39.514 rdma_prtype: unrecognized 00:08:39.514 rdma_qptype: unrecognized 00:08:39.514 rdma_cms: unrecognized 00:08:39.514 rdma_pkey: 0x0000 00:08:39.514 21:54:50 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:39.514 Perform nvmf subsystem discovery via RPC 00:08:39.514 21:54:50 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:39.514 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.514 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.514 [2024-07-26 21:54:50.721051] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:39.514 [ 00:08:39.514 { 00:08:39.514 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:39.515 "subtype": "Discovery", 
00:08:39.515 "listen_addresses": [ 00:08:39.515 { 00:08:39.515 "transport": "RDMA", 00:08:39.515 "trtype": "RDMA", 00:08:39.515 "adrfam": "IPv4", 00:08:39.515 "traddr": "192.168.100.8", 00:08:39.515 "trsvcid": "4420" 00:08:39.515 } 00:08:39.515 ], 00:08:39.515 "allow_any_host": true, 00:08:39.515 "hosts": [] 00:08:39.515 }, 00:08:39.515 { 00:08:39.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:39.515 "subtype": "NVMe", 00:08:39.515 "listen_addresses": [ 00:08:39.515 { 00:08:39.515 "transport": "RDMA", 00:08:39.515 "trtype": "RDMA", 00:08:39.515 "adrfam": "IPv4", 00:08:39.515 "traddr": "192.168.100.8", 00:08:39.515 "trsvcid": "4420" 00:08:39.515 } 00:08:39.515 ], 00:08:39.515 "allow_any_host": true, 00:08:39.515 "hosts": [], 00:08:39.515 "serial_number": "SPDK00000000000001", 00:08:39.515 "model_number": "SPDK bdev Controller", 00:08:39.515 "max_namespaces": 32, 00:08:39.515 "min_cntlid": 1, 00:08:39.515 "max_cntlid": 65519, 00:08:39.515 "namespaces": [ 00:08:39.515 { 00:08:39.515 "nsid": 1, 00:08:39.515 "bdev_name": "Null1", 00:08:39.515 "name": "Null1", 00:08:39.515 "nguid": "D00F15276BAB4213981EA1539BD8DCC7", 00:08:39.515 "uuid": "d00f1527-6bab-4213-981e-a1539bd8dcc7" 00:08:39.515 } 00:08:39.515 ] 00:08:39.515 }, 00:08:39.515 { 00:08:39.515 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:39.515 "subtype": "NVMe", 00:08:39.515 "listen_addresses": [ 00:08:39.515 { 00:08:39.515 "transport": "RDMA", 00:08:39.515 "trtype": "RDMA", 00:08:39.515 "adrfam": "IPv4", 00:08:39.515 "traddr": "192.168.100.8", 00:08:39.515 "trsvcid": "4420" 00:08:39.515 } 00:08:39.515 ], 00:08:39.515 "allow_any_host": true, 00:08:39.515 "hosts": [], 00:08:39.515 "serial_number": "SPDK00000000000002", 00:08:39.515 "model_number": "SPDK bdev Controller", 00:08:39.515 "max_namespaces": 32, 00:08:39.774 "min_cntlid": 1, 00:08:39.774 "max_cntlid": 65519, 00:08:39.774 "namespaces": [ 00:08:39.774 { 00:08:39.774 "nsid": 1, 00:08:39.774 "bdev_name": "Null2", 00:08:39.774 "name": "Null2", 00:08:39.774 "nguid": "4BC56A0F5CB848AB89D4621E26B44D52", 00:08:39.774 "uuid": "4bc56a0f-5cb8-48ab-89d4-621e26b44d52" 00:08:39.774 } 00:08:39.774 ] 00:08:39.774 }, 00:08:39.774 { 00:08:39.774 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:39.774 "subtype": "NVMe", 00:08:39.774 "listen_addresses": [ 00:08:39.774 { 00:08:39.774 "transport": "RDMA", 00:08:39.774 "trtype": "RDMA", 00:08:39.774 "adrfam": "IPv4", 00:08:39.774 "traddr": "192.168.100.8", 00:08:39.774 "trsvcid": "4420" 00:08:39.774 } 00:08:39.774 ], 00:08:39.774 "allow_any_host": true, 00:08:39.774 "hosts": [], 00:08:39.774 "serial_number": "SPDK00000000000003", 00:08:39.774 "model_number": "SPDK bdev Controller", 00:08:39.775 "max_namespaces": 32, 00:08:39.775 "min_cntlid": 1, 00:08:39.775 "max_cntlid": 65519, 00:08:39.775 "namespaces": [ 00:08:39.775 { 00:08:39.775 "nsid": 1, 00:08:39.775 "bdev_name": "Null3", 00:08:39.775 "name": "Null3", 00:08:39.775 "nguid": "87FD7D2B44D64AAAB488857E6A636123", 00:08:39.775 "uuid": "87fd7d2b-44d6-4aaa-b488-857e6a636123" 00:08:39.775 } 00:08:39.775 ] 00:08:39.775 }, 00:08:39.775 { 00:08:39.775 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:39.775 "subtype": "NVMe", 00:08:39.775 "listen_addresses": [ 00:08:39.775 { 00:08:39.775 "transport": "RDMA", 00:08:39.775 "trtype": "RDMA", 00:08:39.775 "adrfam": "IPv4", 00:08:39.775 "traddr": "192.168.100.8", 00:08:39.775 "trsvcid": "4420" 00:08:39.775 } 00:08:39.775 ], 00:08:39.775 "allow_any_host": true, 00:08:39.775 "hosts": [], 00:08:39.775 "serial_number": "SPDK00000000000004", 00:08:39.775 "model_number": "SPDK bdev 
Controller", 00:08:39.775 "max_namespaces": 32, 00:08:39.775 "min_cntlid": 1, 00:08:39.775 "max_cntlid": 65519, 00:08:39.775 "namespaces": [ 00:08:39.775 { 00:08:39.775 "nsid": 1, 00:08:39.775 "bdev_name": "Null4", 00:08:39.775 "name": "Null4", 00:08:39.775 "nguid": "223F0AE01CF04F7EA98D12427EB2AF86", 00:08:39.775 "uuid": "223f0ae0-1cf0-4f7e-a98d-12427eb2af86" 00:08:39.775 } 00:08:39.775 ] 00:08:39.775 } 00:08:39.775 ] 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@42 -- # seq 1 4 00:08:39.775 21:54:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:39.775 21:54:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:39.775 21:54:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:39.775 21:54:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:39.775 21:54:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 
21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:39.775 21:54:50 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:39.775 21:54:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:39.775 21:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:39.775 21:54:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:39.775 21:54:50 -- target/discovery.sh@49 -- # check_bdevs= 00:08:39.775 21:54:50 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:39.775 21:54:50 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:39.775 21:54:50 -- target/discovery.sh@57 -- # nvmftestfini 00:08:39.775 21:54:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:39.775 21:54:50 -- nvmf/common.sh@116 -- # sync 00:08:39.775 21:54:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:39.775 21:54:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:39.775 21:54:50 -- nvmf/common.sh@119 -- # set +e 00:08:39.775 21:54:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:39.775 21:54:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:39.775 rmmod nvme_rdma 00:08:39.775 rmmod nvme_fabrics 00:08:39.775 21:54:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:39.775 21:54:50 -- nvmf/common.sh@123 -- # set -e 00:08:39.775 21:54:50 -- nvmf/common.sh@124 -- # return 0 00:08:39.775 21:54:50 -- nvmf/common.sh@477 -- # '[' -n 2044776 ']' 00:08:39.775 21:54:50 -- nvmf/common.sh@478 -- # killprocess 2044776 00:08:39.775 21:54:50 -- common/autotest_common.sh@926 -- # '[' -z 2044776 ']' 00:08:39.775 21:54:50 -- common/autotest_common.sh@930 -- # kill -0 2044776 00:08:39.775 21:54:50 -- common/autotest_common.sh@931 -- # uname 00:08:39.775 21:54:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:39.775 21:54:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2044776 00:08:39.775 21:54:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:39.775 21:54:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:39.775 21:54:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2044776' 00:08:39.775 killing process with pid 2044776 00:08:39.775 21:54:50 -- common/autotest_common.sh@945 -- # kill 2044776 00:08:39.775 [2024-07-26 21:54:50.987543] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:39.775 21:54:50 -- common/autotest_common.sh@950 -- # wait 2044776 00:08:40.035 21:54:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:40.035 21:54:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:40.035 00:08:40.035 real 0m9.592s 00:08:40.035 user 0m8.617s 00:08:40.035 sys 0m6.277s 00:08:40.035 21:54:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.035 21:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:40.035 ************************************ 00:08:40.035 END TEST nvmf_discovery 00:08:40.035 ************************************ 00:08:40.294 21:54:51 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:40.294 21:54:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:40.294 21:54:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.294 21:54:51 -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.294 ************************************ 00:08:40.294 START TEST nvmf_referrals 00:08:40.294 ************************************ 00:08:40.294 21:54:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:40.294 * Looking for test storage... 00:08:40.294 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:40.294 21:54:51 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.294 21:54:51 -- nvmf/common.sh@7 -- # uname -s 00:08:40.294 21:54:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.294 21:54:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.294 21:54:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.294 21:54:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.294 21:54:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.294 21:54:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.294 21:54:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.294 21:54:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.294 21:54:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.294 21:54:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.294 21:54:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:40.294 21:54:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:40.294 21:54:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.294 21:54:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.294 21:54:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.294 21:54:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:40.294 21:54:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.294 21:54:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.294 21:54:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.294 21:54:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.295 21:54:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.295 21:54:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.295 21:54:51 -- paths/export.sh@5 -- # export PATH 00:08:40.295 21:54:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.295 21:54:51 -- nvmf/common.sh@46 -- # : 0 00:08:40.295 21:54:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:40.295 21:54:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:40.295 21:54:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:40.295 21:54:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.295 21:54:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.295 21:54:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:40.295 21:54:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:40.295 21:54:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:40.295 21:54:51 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:40.295 21:54:51 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:40.295 21:54:51 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:40.295 21:54:51 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:40.295 21:54:51 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:40.295 21:54:51 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:40.295 21:54:51 -- target/referrals.sh@37 -- # nvmftestinit 00:08:40.295 21:54:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:40.295 21:54:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.295 21:54:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:40.295 21:54:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:40.295 21:54:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:40.295 21:54:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.295 21:54:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.295 21:54:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.295 21:54:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:40.295 21:54:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:40.295 21:54:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:40.295 21:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:48.425 21:54:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:48.425 21:54:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:48.425 21:54:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:48.425 21:54:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 
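The referrals test starting here reuses the same RDMA setup and exercises the discovery referral RPCs around nqn.2014-08.org.nvmexpress.discovery, using the 127.0.0.2-127.0.0.4 referral addresses and port 4430 defined above. A rough sketch of one add/inspect/remove round trip, assuming rpc.py against the default socket (the exact checks live in referrals.sh):
scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_get_referrals        # expect the 127.0.0.2:4430 entry to be listed
scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430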
00:08:48.425 21:54:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:48.425 21:54:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:48.425 21:54:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:48.425 21:54:59 -- nvmf/common.sh@294 -- # net_devs=() 00:08:48.425 21:54:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:48.425 21:54:59 -- nvmf/common.sh@295 -- # e810=() 00:08:48.425 21:54:59 -- nvmf/common.sh@295 -- # local -ga e810 00:08:48.425 21:54:59 -- nvmf/common.sh@296 -- # x722=() 00:08:48.425 21:54:59 -- nvmf/common.sh@296 -- # local -ga x722 00:08:48.425 21:54:59 -- nvmf/common.sh@297 -- # mlx=() 00:08:48.425 21:54:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:48.425 21:54:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.425 21:54:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:48.425 21:54:59 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:48.425 21:54:59 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:48.425 21:54:59 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:48.425 21:54:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:48.425 21:54:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:48.425 21:54:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:48.425 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:48.425 21:54:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:48.425 21:54:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:48.425 21:54:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:48.425 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:48.425 21:54:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect 
-i 15' 00:08:48.425 21:54:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:48.425 21:54:59 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:48.425 21:54:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:48.425 21:54:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.425 21:54:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:48.425 21:54:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.426 21:54:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:48.426 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.426 21:54:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.426 21:54:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:48.426 21:54:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.426 21:54:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:48.426 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.426 21:54:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:48.426 21:54:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:48.426 21:54:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:48.426 21:54:59 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:48.426 21:54:59 -- nvmf/common.sh@57 -- # uname 00:08:48.426 21:54:59 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:48.426 21:54:59 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:48.426 21:54:59 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:48.426 21:54:59 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:48.426 21:54:59 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:48.426 21:54:59 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:48.426 21:54:59 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:48.426 21:54:59 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:48.426 21:54:59 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:48.426 21:54:59 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:48.426 21:54:59 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:48.426 21:54:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.426 21:54:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:48.426 21:54:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:48.426 21:54:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.426 21:54:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:48.426 21:54:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@104 -- # continue 2 00:08:48.426 21:54:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@104 -- # continue 2 00:08:48.426 21:54:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:48.426 21:54:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:48.426 21:54:59 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:48.426 21:54:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:48.426 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.426 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:48.426 altname enp217s0f0np0 00:08:48.426 altname ens818f0np0 00:08:48.426 inet 192.168.100.8/24 scope global mlx_0_0 00:08:48.426 valid_lft forever preferred_lft forever 00:08:48.426 21:54:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:48.426 21:54:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:48.426 21:54:59 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:48.426 21:54:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:48.426 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:48.426 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:48.426 altname enp217s0f1np1 00:08:48.426 altname ens818f1np1 00:08:48.426 inet 192.168.100.9/24 scope global mlx_0_1 00:08:48.426 valid_lft forever preferred_lft forever 00:08:48.426 21:54:59 -- nvmf/common.sh@410 -- # return 0 00:08:48.426 21:54:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:48.426 21:54:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:48.426 21:54:59 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:48.426 21:54:59 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:48.426 21:54:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:48.426 21:54:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:48.426 21:54:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:48.426 21:54:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:48.426 21:54:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:48.426 21:54:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@104 -- # continue 2 00:08:48.426 21:54:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:48.426 21:54:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:48.426 21:54:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@104 -- # continue 2 00:08:48.426 21:54:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:48.426 21:54:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:48.426 21:54:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:48.426 21:54:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:48.426 21:54:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:48.426 21:54:59 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:48.426 192.168.100.9' 00:08:48.426 21:54:59 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:48.426 192.168.100.9' 00:08:48.426 21:54:59 -- nvmf/common.sh@445 -- # head -n 1 00:08:48.426 21:54:59 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:48.426 21:54:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:48.426 192.168.100.9' 00:08:48.426 21:54:59 -- nvmf/common.sh@446 -- # tail -n +2 00:08:48.426 21:54:59 -- nvmf/common.sh@446 -- # head -n 1 00:08:48.426 21:54:59 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:48.426 21:54:59 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:48.426 21:54:59 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:48.426 21:54:59 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:48.426 21:54:59 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:48.426 21:54:59 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:48.426 21:54:59 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:48.426 21:54:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:48.426 21:54:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:48.426 21:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:48.426 21:54:59 -- nvmf/common.sh@469 -- # nvmfpid=2049040 00:08:48.426 21:54:59 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.426 21:54:59 -- nvmf/common.sh@470 -- # waitforlisten 2049040 00:08:48.426 21:54:59 -- common/autotest_common.sh@819 -- # '[' -z 2049040 ']' 00:08:48.426 21:54:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.426 21:54:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:48.426 21:54:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:48.426 21:54:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:48.426 21:54:59 -- common/autotest_common.sh@10 -- # set +x 00:08:48.426 [2024-07-26 21:54:59.520955] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:48.426 [2024-07-26 21:54:59.521011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.426 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.426 [2024-07-26 21:54:59.606503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.426 [2024-07-26 21:54:59.644641] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:48.426 [2024-07-26 21:54:59.644774] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.426 [2024-07-26 21:54:59.644784] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.426 [2024-07-26 21:54:59.644798] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.426 [2024-07-26 21:54:59.644845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.427 [2024-07-26 21:54:59.644938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.427 [2024-07-26 21:54:59.644961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:48.427 [2024-07-26 21:54:59.644962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.426 21:55:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:49.426 21:55:00 -- common/autotest_common.sh@852 -- # return 0 00:08:49.426 21:55:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:49.426 21:55:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:49.426 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 21:55:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.426 21:55:00 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:49.426 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.426 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 [2024-07-26 21:55:00.397019] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x246f4b0/0x24739a0) succeed. 00:08:49.426 [2024-07-26 21:55:00.407352] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2470aa0/0x24b5030) succeed. 
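Aside: the prologue above amounts to launching nvmf_tgt, waiting for its RPC socket, and creating the RDMA transport before any subsystem work. A rough sketch under the assumption that rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock; the wait loop here is a simplified stand-in for waitforlisten, and paths are shown relative to the spdk checkout:

    # Start the target with the core mask used above and remember its pid.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the RPC socket is accepting requests.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

    # Create the RDMA transport with the same options as the trace.
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192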
00:08:49.426 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.426 21:55:00 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:49.426 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.426 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 [2024-07-26 21:55:00.531739] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:49.426 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.426 21:55:00 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:49.426 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.426 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.426 21:55:00 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:49.426 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.426 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.426 21:55:00 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:49.426 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.426 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.426 21:55:00 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.426 21:55:00 -- target/referrals.sh@48 -- # jq length 00:08:49.426 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.426 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.426 21:55:00 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:49.426 21:55:00 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:49.426 21:55:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:49.426 21:55:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.426 21:55:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:49.426 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.426 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.426 21:55:00 -- target/referrals.sh@21 -- # sort 00:08:49.426 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.426 21:55:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:49.426 21:55:00 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:49.685 21:55:00 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:49.685 21:55:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:49.685 21:55:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:49.685 21:55:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:49.685 21:55:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:49.685 21:55:00 -- target/referrals.sh@26 -- # sort 00:08:49.685 21:55:00 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
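Aside: the referral checks above and below boil down to registering referrals over RPC, then confirming that the target's own view and an NVMe host's discovery log report the same addresses. A condensed sketch using the same commands the trace runs (rpc_cmd assumed to wrap scripts/rpc.py; hostnqn/hostid as logged for this run):

    # Expose the discovery service and register three referrals, as above.
    rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for addr in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t rdma -a "$addr" -s 4430
    done

    # Target-side view of the referrals.
    rpc_view=$(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)

    # Host-side view via the discovery log page (referral entries only).
    nvme_view=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -a 192.168.100.8 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)

    [[ "$rpc_view" == "$nvme_view" ]]   # the test asserts these two views match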
00:08:49.685 21:55:00 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:49.685 21:55:00 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:49.686 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.686 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.686 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.686 21:55:00 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:49.686 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.686 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.686 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.686 21:55:00 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:49.686 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.686 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.686 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.686 21:55:00 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.686 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.686 21:55:00 -- target/referrals.sh@56 -- # jq length 00:08:49.686 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.686 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.686 21:55:00 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:49.686 21:55:00 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:49.686 21:55:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:49.686 21:55:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:49.686 21:55:00 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:49.686 21:55:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:49.686 21:55:00 -- target/referrals.sh@26 -- # sort 00:08:49.945 21:55:00 -- target/referrals.sh@26 -- # echo 00:08:49.945 21:55:00 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:49.945 21:55:00 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:49.945 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.945 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.945 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.945 21:55:00 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:49.945 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.945 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.945 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.945 21:55:00 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:49.945 21:55:00 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:49.945 21:55:00 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:49.945 21:55:00 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:49.945 21:55:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:49.945 21:55:00 -- common/autotest_common.sh@10 -- # set +x 00:08:49.945 21:55:00 -- 
target/referrals.sh@21 -- # sort 00:08:49.945 21:55:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:49.945 21:55:00 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:49.945 21:55:00 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:49.945 21:55:00 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:49.945 21:55:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:49.945 21:55:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:49.945 21:55:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:49.946 21:55:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:49.946 21:55:01 -- target/referrals.sh@26 -- # sort 00:08:49.946 21:55:01 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:49.946 21:55:01 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:49.946 21:55:01 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:49.946 21:55:01 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:49.946 21:55:01 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:49.946 21:55:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:49.946 21:55:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:50.205 21:55:01 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:50.205 21:55:01 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:50.205 21:55:01 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:50.205 21:55:01 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:50.205 21:55:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.205 21:55:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:50.205 21:55:01 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:50.205 21:55:01 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:50.205 21:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.205 21:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:50.205 21:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.205 21:55:01 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:50.205 21:55:01 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:50.205 21:55:01 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:50.205 21:55:01 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:50.205 21:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.205 21:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:50.205 21:55:01 -- target/referrals.sh@21 -- # 
sort 00:08:50.205 21:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.205 21:55:01 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:50.205 21:55:01 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:50.205 21:55:01 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:50.205 21:55:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:50.205 21:55:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:50.205 21:55:01 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.205 21:55:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:50.205 21:55:01 -- target/referrals.sh@26 -- # sort 00:08:50.464 21:55:01 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:50.464 21:55:01 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:50.464 21:55:01 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:50.464 21:55:01 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:50.464 21:55:01 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:50.464 21:55:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.464 21:55:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:50.464 21:55:01 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:50.464 21:55:01 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:50.464 21:55:01 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:50.464 21:55:01 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:50.464 21:55:01 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.464 21:55:01 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:50.464 21:55:01 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:50.464 21:55:01 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:50.464 21:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.464 21:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:50.464 21:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.464 21:55:01 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:50.464 21:55:01 -- target/referrals.sh@82 -- # jq length 00:08:50.464 21:55:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:50.464 21:55:01 -- common/autotest_common.sh@10 -- # set +x 00:08:50.464 21:55:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:50.724 21:55:01 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:50.724 21:55:01 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:50.724 21:55:01 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:50.724 21:55:01 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:50.724 21:55:01 -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:50.724 21:55:01 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:50.724 21:55:01 -- target/referrals.sh@26 -- # sort 00:08:50.724 21:55:01 -- target/referrals.sh@26 -- # echo 00:08:50.724 21:55:01 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:50.724 21:55:01 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:50.724 21:55:01 -- target/referrals.sh@86 -- # nvmftestfini 00:08:50.724 21:55:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:50.724 21:55:01 -- nvmf/common.sh@116 -- # sync 00:08:50.724 21:55:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:50.724 21:55:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:50.724 21:55:01 -- nvmf/common.sh@119 -- # set +e 00:08:50.724 21:55:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:50.724 21:55:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:50.724 rmmod nvme_rdma 00:08:50.724 rmmod nvme_fabrics 00:08:50.724 21:55:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:50.724 21:55:01 -- nvmf/common.sh@123 -- # set -e 00:08:50.724 21:55:01 -- nvmf/common.sh@124 -- # return 0 00:08:50.724 21:55:01 -- nvmf/common.sh@477 -- # '[' -n 2049040 ']' 00:08:50.724 21:55:01 -- nvmf/common.sh@478 -- # killprocess 2049040 00:08:50.724 21:55:01 -- common/autotest_common.sh@926 -- # '[' -z 2049040 ']' 00:08:50.724 21:55:01 -- common/autotest_common.sh@930 -- # kill -0 2049040 00:08:50.724 21:55:01 -- common/autotest_common.sh@931 -- # uname 00:08:50.724 21:55:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:50.724 21:55:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2049040 00:08:50.724 21:55:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:50.724 21:55:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:50.724 21:55:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2049040' 00:08:50.724 killing process with pid 2049040 00:08:50.724 21:55:01 -- common/autotest_common.sh@945 -- # kill 2049040 00:08:50.724 21:55:01 -- common/autotest_common.sh@950 -- # wait 2049040 00:08:50.984 21:55:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:50.984 21:55:02 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:50.984 00:08:50.984 real 0m10.870s 00:08:50.984 user 0m12.894s 00:08:50.984 sys 0m7.006s 00:08:50.984 21:55:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.984 21:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:50.984 ************************************ 00:08:50.984 END TEST nvmf_referrals 00:08:50.984 ************************************ 00:08:50.984 21:55:02 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:50.984 21:55:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:50.984 21:55:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.984 21:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:50.984 ************************************ 00:08:50.984 START TEST nvmf_connect_disconnect 00:08:50.984 ************************************ 00:08:50.984 21:55:02 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:51.244 * Looking for test storage... 00:08:51.244 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:51.244 21:55:02 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.244 21:55:02 -- nvmf/common.sh@7 -- # uname -s 00:08:51.244 21:55:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.244 21:55:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.244 21:55:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.244 21:55:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.244 21:55:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.244 21:55:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.244 21:55:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.244 21:55:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.244 21:55:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.244 21:55:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.244 21:55:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:51.244 21:55:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:51.244 21:55:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.244 21:55:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.244 21:55:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.244 21:55:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:51.244 21:55:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.244 21:55:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.244 21:55:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.244 21:55:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.244 21:55:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.244 21:55:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.244 21:55:02 -- paths/export.sh@5 -- # export PATH 00:08:51.244 21:55:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.244 21:55:02 -- nvmf/common.sh@46 -- # : 0 00:08:51.244 21:55:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:51.244 21:55:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:51.244 21:55:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:51.244 21:55:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.244 21:55:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.244 21:55:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:51.244 21:55:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:51.244 21:55:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:51.244 21:55:02 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.244 21:55:02 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.244 21:55:02 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:51.244 21:55:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:51.244 21:55:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.244 21:55:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:51.244 21:55:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:51.244 21:55:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:51.244 21:55:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.244 21:55:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.244 21:55:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.244 21:55:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:51.244 21:55:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:51.244 21:55:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:51.244 21:55:02 -- common/autotest_common.sh@10 -- # set +x 00:08:59.381 21:55:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:59.381 21:55:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:59.381 21:55:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:59.381 21:55:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:59.381 21:55:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:59.381 21:55:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:59.381 21:55:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:59.381 21:55:09 -- nvmf/common.sh@294 -- # net_devs=() 00:08:59.381 21:55:09 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:59.381 21:55:09 -- nvmf/common.sh@295 -- # e810=() 00:08:59.381 21:55:09 -- nvmf/common.sh@295 -- # local -ga e810 00:08:59.381 21:55:09 -- nvmf/common.sh@296 -- # x722=() 00:08:59.381 21:55:09 -- nvmf/common.sh@296 -- # local -ga x722 00:08:59.381 21:55:09 -- nvmf/common.sh@297 -- # mlx=() 00:08:59.381 21:55:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:59.381 21:55:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.381 21:55:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.381 21:55:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.381 21:55:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.381 21:55:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.381 21:55:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.382 21:55:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.382 21:55:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.382 21:55:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.382 21:55:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.382 21:55:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.382 21:55:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:59.382 21:55:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:59.382 21:55:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:59.382 21:55:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:59.382 21:55:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:59.382 21:55:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:59.382 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:59.382 21:55:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:59.382 21:55:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:59.382 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:59.382 21:55:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:59.382 21:55:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:59.382 21:55:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.382 21:55:09 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:59.382 21:55:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.382 21:55:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:59.382 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.382 21:55:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.382 21:55:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:59.382 21:55:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.382 21:55:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:59.382 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:59.382 21:55:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.382 21:55:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:59.382 21:55:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:59.382 21:55:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:59.382 21:55:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:59.382 21:55:09 -- nvmf/common.sh@57 -- # uname 00:08:59.382 21:55:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:59.382 21:55:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:59.382 21:55:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:59.382 21:55:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:59.382 21:55:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:59.382 21:55:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:59.382 21:55:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:59.382 21:55:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:59.382 21:55:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:59.382 21:55:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:59.382 21:55:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:59.382 21:55:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:59.382 21:55:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:59.382 21:55:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:59.382 21:55:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:59.382 21:55:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:59.382 21:55:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@104 -- # continue 2 00:08:59.382 21:55:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:59.382 21:55:09 -- nvmf/common.sh@104 -- # continue 2 00:08:59.382 21:55:09 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:59.382 21:55:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:59.382 21:55:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:59.382 21:55:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:59.382 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:59.382 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:59.382 altname enp217s0f0np0 00:08:59.382 altname ens818f0np0 00:08:59.382 inet 192.168.100.8/24 scope global mlx_0_0 00:08:59.382 valid_lft forever preferred_lft forever 00:08:59.382 21:55:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:59.382 21:55:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:59.382 21:55:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:59.382 21:55:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:59.382 21:55:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:59.382 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:59.382 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:59.382 altname enp217s0f1np1 00:08:59.382 altname ens818f1np1 00:08:59.382 inet 192.168.100.9/24 scope global mlx_0_1 00:08:59.382 valid_lft forever preferred_lft forever 00:08:59.382 21:55:09 -- nvmf/common.sh@410 -- # return 0 00:08:59.382 21:55:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:59.382 21:55:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:59.382 21:55:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:59.382 21:55:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:59.382 21:55:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:59.382 21:55:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:59.382 21:55:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:59.382 21:55:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:59.382 21:55:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:59.382 21:55:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@104 -- # continue 2 00:08:59.382 21:55:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:59.382 21:55:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:59.382 21:55:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 
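Aside: the interface selection repeated above (here and in the earlier referrals run) filters the detected net devices down to the RDMA-capable ones: the rxe_cfg helper lists candidate devices, and each entry of net_devs is kept only if it appears in that list. A compact sketch mirroring the continue 2 loop in the trace, with the helper path shown relative to the spdk tree and the capture into an array as an illustrative choice:

    # Net devices found under the mlx5 PCI functions earlier in the trace.
    net_devs=(mlx_0_0 mlx_0_1)

    # Candidate RDMA-capable net devices reported by the rxe_cfg helper.
    mapfile -t rxe_net_devs < <(./scripts/rxe_cfg_small.sh rxe-net)

    rdma_if_list=()
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                rdma_if_list+=("$net_dev")   # keep it and move on to the next net_dev
                continue 2
            fi
        done
    done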
00:08:59.382 21:55:09 -- nvmf/common.sh@104 -- # continue 2 00:08:59.382 21:55:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:59.382 21:55:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:59.382 21:55:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:59.382 21:55:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:59.382 21:55:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:59.382 21:55:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:59.382 21:55:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:59.382 192.168.100.9' 00:08:59.382 21:55:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:59.382 192.168.100.9' 00:08:59.382 21:55:09 -- nvmf/common.sh@445 -- # head -n 1 00:08:59.382 21:55:09 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:59.382 21:55:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:59.382 192.168.100.9' 00:08:59.382 21:55:09 -- nvmf/common.sh@446 -- # tail -n +2 00:08:59.382 21:55:09 -- nvmf/common.sh@446 -- # head -n 1 00:08:59.382 21:55:09 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:59.382 21:55:09 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:59.382 21:55:09 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:59.382 21:55:09 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:59.382 21:55:09 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:59.382 21:55:09 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:59.382 21:55:10 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:59.382 21:55:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:59.382 21:55:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:59.382 21:55:10 -- common/autotest_common.sh@10 -- # set +x 00:08:59.382 21:55:10 -- nvmf/common.sh@469 -- # nvmfpid=2053604 00:08:59.382 21:55:10 -- nvmf/common.sh@470 -- # waitforlisten 2053604 00:08:59.382 21:55:10 -- common/autotest_common.sh@819 -- # '[' -z 2053604 ']' 00:08:59.382 21:55:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.382 21:55:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:59.382 21:55:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.382 21:55:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:59.382 21:55:10 -- common/autotest_common.sh@10 -- # set +x 00:08:59.383 21:55:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.383 [2024-07-26 21:55:10.067149] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:59.383 [2024-07-26 21:55:10.067200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.383 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.383 [2024-07-26 21:55:10.153746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.383 [2024-07-26 21:55:10.192468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.383 [2024-07-26 21:55:10.192574] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.383 [2024-07-26 21:55:10.192584] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.383 [2024-07-26 21:55:10.192593] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.383 [2024-07-26 21:55:10.192644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.383 [2024-07-26 21:55:10.192700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.383 [2024-07-26 21:55:10.192719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.383 [2024-07-26 21:55:10.192720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.641 21:55:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:59.641 21:55:10 -- common/autotest_common.sh@852 -- # return 0 00:08:59.641 21:55:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:59.641 21:55:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:59.641 21:55:10 -- common/autotest_common.sh@10 -- # set +x 00:08:59.900 21:55:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.900 21:55:10 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:59.900 21:55:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.900 21:55:10 -- common/autotest_common.sh@10 -- # set +x 00:08:59.900 [2024-07-26 21:55:10.915974] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:59.900 [2024-07-26 21:55:10.938385] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ab74b0/0x1abb9a0) succeed. 00:08:59.900 [2024-07-26 21:55:10.948667] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ab8aa0/0x1afd030) succeed. 
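Aside: what follows in the trace is the provisioning of a Malloc-backed subsystem on 192.168.100.8:4420 and then 100 connect/disconnect iterations against nqn.2016-06.io.spdk:cnode1, visible below as the repeated "disconnected 1 controller(s)" lines. Roughly what each iteration does, using the options shown in the log (NVME_CONNECT='nvme connect -i 8', host identity from common.sh); this is a sketch, not the script's exact body:

    subnqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 100; i++)); do
        # Connect to the subsystem over RDMA with the options the script sets.
        nvme connect -i 8 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t rdma -n "$subnqn" -a 192.168.100.8 -s 4420

        # Tear the association back down; this prints the
        # "NQN:... disconnected 1 controller(s)" lines seen below.
        nvme disconnect -n "$subnqn"
    done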
00:08:59.900 21:55:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:59.900 21:55:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.900 21:55:11 -- common/autotest_common.sh@10 -- # set +x 00:08:59.900 21:55:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:59.900 21:55:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.900 21:55:11 -- common/autotest_common.sh@10 -- # set +x 00:08:59.900 21:55:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.900 21:55:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.900 21:55:11 -- common/autotest_common.sh@10 -- # set +x 00:08:59.900 21:55:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:59.900 21:55:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.900 21:55:11 -- common/autotest_common.sh@10 -- # set +x 00:08:59.900 [2024-07-26 21:55:11.089532] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:59.900 21:55:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:59.900 21:55:11 -- target/connect_disconnect.sh@34 -- # set +x 00:09:03.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.317 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.290 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.707 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.908 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:43.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.142 22:00:25 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:14.142 22:00:25 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:14.142 22:00:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:14.142 22:00:25 -- nvmf/common.sh@116 -- # sync 00:14:14.142 22:00:25 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:14.142 22:00:25 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:14.142 22:00:25 -- nvmf/common.sh@119 -- # set +e 00:14:14.142 22:00:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:14.142 22:00:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:14.142 rmmod nvme_rdma 00:14:14.142 rmmod nvme_fabrics 00:14:14.142 22:00:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:14.142 22:00:25 -- nvmf/common.sh@123 -- # set -e 00:14:14.142 22:00:25 -- nvmf/common.sh@124 -- # return 0 00:14:14.142 22:00:25 -- nvmf/common.sh@477 -- # '[' -n 2053604 ']' 00:14:14.142 22:00:25 -- nvmf/common.sh@478 -- # killprocess 2053604 00:14:14.142 22:00:25 -- common/autotest_common.sh@926 -- # '[' -z 2053604 ']' 00:14:14.142 22:00:25 -- common/autotest_common.sh@930 -- # kill -0 2053604 00:14:14.142 22:00:25 -- common/autotest_common.sh@931 -- # uname 00:14:14.142 22:00:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:14.142 22:00:25 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2053604 00:14:14.142 22:00:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:14.142 22:00:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:14.142 22:00:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2053604' 00:14:14.142 killing process with pid 2053604 00:14:14.142 22:00:25 -- common/autotest_common.sh@945 -- # kill 2053604 00:14:14.142 22:00:25 -- common/autotest_common.sh@950 -- # wait 2053604 00:14:14.400 22:00:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:14.400 22:00:25 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:14.400 00:14:14.400 real 5m23.363s 00:14:14.400 user 20m58.417s 00:14:14.400 sys 0m17.970s 00:14:14.400 22:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.400 22:00:25 -- common/autotest_common.sh@10 -- # set +x 00:14:14.400 ************************************ 00:14:14.400 END TEST nvmf_connect_disconnect 00:14:14.400 ************************************ 00:14:14.400 22:00:25 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:14.400 22:00:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:14.400 22:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:14.400 22:00:25 -- common/autotest_common.sh@10 -- # set +x 00:14:14.400 ************************************ 00:14:14.400 START TEST nvmf_multitarget 00:14:14.400 ************************************ 00:14:14.400 22:00:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:14.659 * Looking for test storage... 00:14:14.659 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:14.659 22:00:25 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.659 22:00:25 -- nvmf/common.sh@7 -- # uname -s 00:14:14.659 22:00:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.659 22:00:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.659 22:00:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.659 22:00:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.659 22:00:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.659 22:00:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.659 22:00:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.659 22:00:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.659 22:00:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.659 22:00:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.659 22:00:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:14.659 22:00:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:14.659 22:00:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.659 22:00:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.659 22:00:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.659 22:00:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:14.659 22:00:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.660 22:00:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.660 
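For orientation, the connect/disconnect exercise that just finished reduces to the target setup plus a 100-iteration connect/disconnect loop; a minimal sketch, assuming rpc.py stands in for the suite's rpc_cmd wrapper (the addresses, sizes and the -i 8 queue count are the ones visible in the trace, and the rdma transport has already been created by this point):

  rpc.py bdev_malloc_create 64 512                                    # -> Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  for i in $(seq 1 100); do
      nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "... disconnected 1 controller(s)"
  done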
22:00:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.660 22:00:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.660 22:00:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.660 22:00:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.660 22:00:25 -- paths/export.sh@5 -- # export PATH 00:14:14.660 22:00:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.660 22:00:25 -- nvmf/common.sh@46 -- # : 0 00:14:14.660 22:00:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:14.660 22:00:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:14.660 22:00:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:14.660 22:00:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.660 22:00:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.660 22:00:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:14.660 22:00:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:14.660 22:00:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:14.660 22:00:25 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:14.660 22:00:25 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:14.660 22:00:25 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:14.660 22:00:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.660 22:00:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:14.660 22:00:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:14:14.660 22:00:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:14.660 22:00:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.660 22:00:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.660 22:00:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.660 22:00:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:14.660 22:00:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:14.660 22:00:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:14.660 22:00:25 -- common/autotest_common.sh@10 -- # set +x 00:14:22.781 22:00:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:22.781 22:00:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:22.781 22:00:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:22.781 22:00:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:22.781 22:00:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:22.781 22:00:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:22.781 22:00:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:22.781 22:00:33 -- nvmf/common.sh@294 -- # net_devs=() 00:14:22.781 22:00:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:22.781 22:00:33 -- nvmf/common.sh@295 -- # e810=() 00:14:22.781 22:00:33 -- nvmf/common.sh@295 -- # local -ga e810 00:14:22.781 22:00:33 -- nvmf/common.sh@296 -- # x722=() 00:14:22.781 22:00:33 -- nvmf/common.sh@296 -- # local -ga x722 00:14:22.781 22:00:33 -- nvmf/common.sh@297 -- # mlx=() 00:14:22.781 22:00:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:22.782 22:00:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:22.782 22:00:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:22.782 22:00:33 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:22.782 22:00:33 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:22.782 22:00:33 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:22.782 22:00:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:22.782 22:00:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:22.782 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:22.782 22:00:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:22.782 
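The PCI scan being traced here amounts to matching known Mellanox/Intel device IDs and then listing the net interfaces that sit under each matching PCI function; the same lookup in isolation, using the PCI address reported in this run:

  pci=0000:d9:00.0
  ls "/sys/bus/pci/devices/$pci/net/"        # -> mlx_0_0 on this host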
22:00:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:22.782 22:00:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:22.782 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:22.782 22:00:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:22.782 22:00:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:22.782 22:00:33 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.782 22:00:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:22.782 22:00:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.782 22:00:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:22.782 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:22.782 22:00:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.782 22:00:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:22.782 22:00:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:22.782 22:00:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:22.782 22:00:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:22.782 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:22.782 22:00:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:22.782 22:00:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:22.782 22:00:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:22.782 22:00:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:22.782 22:00:33 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:22.782 22:00:33 -- nvmf/common.sh@57 -- # uname 00:14:22.782 22:00:33 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:22.782 22:00:33 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:22.782 22:00:33 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:22.782 22:00:33 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:22.782 22:00:33 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:22.782 22:00:33 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:22.782 22:00:33 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:22.782 22:00:33 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:22.782 22:00:33 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:22.782 22:00:33 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:22.782 22:00:33 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:22.782 22:00:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:22.782 22:00:33 -- nvmf/common.sh@93 -- # 
mapfile -t rxe_net_devs 00:14:22.782 22:00:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:22.782 22:00:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:22.782 22:00:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:22.782 22:00:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:22.782 22:00:33 -- nvmf/common.sh@104 -- # continue 2 00:14:22.782 22:00:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.782 22:00:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:22.782 22:00:33 -- nvmf/common.sh@104 -- # continue 2 00:14:22.782 22:00:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:22.782 22:00:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:14:22.782 22:00:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:22.782 22:00:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:22.782 22:00:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.782 22:00:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.782 22:00:33 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:22.782 22:00:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:22.782 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:22.782 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:22.782 altname enp217s0f0np0 00:14:22.782 altname ens818f0np0 00:14:22.782 inet 192.168.100.8/24 scope global mlx_0_0 00:14:22.782 valid_lft forever preferred_lft forever 00:14:22.782 22:00:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:22.782 22:00:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:22.782 22:00:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:22.782 22:00:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:22.782 22:00:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.782 22:00:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.782 22:00:33 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:22.782 22:00:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:22.782 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:22.782 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:22.782 altname enp217s0f1np1 00:14:22.782 altname ens818f1np1 00:14:22.782 inet 192.168.100.9/24 scope global mlx_0_1 00:14:22.782 valid_lft forever preferred_lft forever 00:14:22.782 22:00:33 -- nvmf/common.sh@410 -- # return 0 00:14:22.782 22:00:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:22.782 22:00:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:22.782 22:00:33 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:22.782 22:00:33 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:22.782 22:00:33 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:22.782 22:00:33 -- nvmf/common.sh@91 -- # 
local net_dev rxe_net_dev rxe_net_devs 00:14:22.782 22:00:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:22.782 22:00:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:22.782 22:00:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:22.782 22:00:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:22.782 22:00:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:22.783 22:00:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.783 22:00:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:22.783 22:00:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:22.783 22:00:33 -- nvmf/common.sh@104 -- # continue 2 00:14:22.783 22:00:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:22.783 22:00:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.783 22:00:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:22.783 22:00:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:22.783 22:00:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:22.783 22:00:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:22.783 22:00:33 -- nvmf/common.sh@104 -- # continue 2 00:14:22.783 22:00:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:22.783 22:00:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:22.783 22:00:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:22.783 22:00:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:22.783 22:00:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.783 22:00:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.783 22:00:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:22.783 22:00:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:22.783 22:00:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:22.783 22:00:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:22.783 22:00:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:22.783 22:00:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:22.783 22:00:33 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:22.783 192.168.100.9' 00:14:22.783 22:00:33 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:22.783 192.168.100.9' 00:14:22.783 22:00:33 -- nvmf/common.sh@445 -- # head -n 1 00:14:22.783 22:00:33 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:22.783 22:00:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:22.783 192.168.100.9' 00:14:22.783 22:00:33 -- nvmf/common.sh@446 -- # tail -n +2 00:14:22.783 22:00:33 -- nvmf/common.sh@446 -- # head -n 1 00:14:22.783 22:00:34 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:22.783 22:00:34 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:22.783 22:00:34 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:22.783 22:00:34 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:22.783 22:00:34 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:22.783 22:00:34 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:23.041 22:00:34 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:23.041 22:00:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:23.041 22:00:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:23.041 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:14:23.041 22:00:34 -- nvmf/common.sh@469 -- # nvmfpid=2114347 00:14:23.041 22:00:34 -- 
nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.041 22:00:34 -- nvmf/common.sh@470 -- # waitforlisten 2114347 00:14:23.041 22:00:34 -- common/autotest_common.sh@819 -- # '[' -z 2114347 ']' 00:14:23.041 22:00:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.041 22:00:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:23.041 22:00:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.041 22:00:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:23.041 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:14:23.041 [2024-07-26 22:00:34.084149] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:23.041 [2024-07-26 22:00:34.084198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.041 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.041 [2024-07-26 22:00:34.169659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.041 [2024-07-26 22:00:34.208556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:23.041 [2024-07-26 22:00:34.208672] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.041 [2024-07-26 22:00:34.208686] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.041 [2024-07-26 22:00:34.208696] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
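The nvmfappstart step traced just above launches the target binary with the flags shown and then waits for its RPC socket; a hedged sketch of that wait, assuming the default /var/tmp/spdk.sock socket (the suite's waitforlisten helper is more thorough):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5        # poll until the target answers RPCs
  done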
00:14:23.041 [2024-07-26 22:00:34.208748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.041 [2024-07-26 22:00:34.208842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.041 [2024-07-26 22:00:34.208904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.041 [2024-07-26 22:00:34.208905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.974 22:00:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:23.974 22:00:34 -- common/autotest_common.sh@852 -- # return 0 00:14:23.974 22:00:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:23.974 22:00:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:23.974 22:00:34 -- common/autotest_common.sh@10 -- # set +x 00:14:23.974 22:00:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.974 22:00:34 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:23.974 22:00:34 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:23.974 22:00:34 -- target/multitarget.sh@21 -- # jq length 00:14:23.974 22:00:35 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:23.974 22:00:35 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:23.974 "nvmf_tgt_1" 00:14:23.974 22:00:35 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:24.232 "nvmf_tgt_2" 00:14:24.232 22:00:35 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:24.232 22:00:35 -- target/multitarget.sh@28 -- # jq length 00:14:24.232 22:00:35 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:24.232 22:00:35 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:24.232 true 00:14:24.490 22:00:35 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:24.490 true 00:14:24.490 22:00:35 -- target/multitarget.sh@35 -- # jq length 00:14:24.490 22:00:35 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:24.490 22:00:35 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:24.490 22:00:35 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:24.490 22:00:35 -- target/multitarget.sh@41 -- # nvmftestfini 00:14:24.490 22:00:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:24.490 22:00:35 -- nvmf/common.sh@116 -- # sync 00:14:24.490 22:00:35 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:24.490 22:00:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:24.490 22:00:35 -- nvmf/common.sh@119 -- # set +e 00:14:24.490 22:00:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:24.490 22:00:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:24.490 rmmod nvme_rdma 00:14:24.490 rmmod nvme_fabrics 00:14:24.490 22:00:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:24.781 22:00:35 -- nvmf/common.sh@123 -- # set -e 00:14:24.781 22:00:35 -- nvmf/common.sh@124 -- # 
return 0 00:14:24.781 22:00:35 -- nvmf/common.sh@477 -- # '[' -n 2114347 ']' 00:14:24.781 22:00:35 -- nvmf/common.sh@478 -- # killprocess 2114347 00:14:24.781 22:00:35 -- common/autotest_common.sh@926 -- # '[' -z 2114347 ']' 00:14:24.781 22:00:35 -- common/autotest_common.sh@930 -- # kill -0 2114347 00:14:24.781 22:00:35 -- common/autotest_common.sh@931 -- # uname 00:14:24.781 22:00:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:24.781 22:00:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2114347 00:14:24.781 22:00:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:24.781 22:00:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:24.781 22:00:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2114347' 00:14:24.781 killing process with pid 2114347 00:14:24.781 22:00:35 -- common/autotest_common.sh@945 -- # kill 2114347 00:14:24.781 22:00:35 -- common/autotest_common.sh@950 -- # wait 2114347 00:14:24.781 22:00:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:24.781 22:00:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:24.781 00:14:24.781 real 0m10.328s 00:14:24.781 user 0m9.976s 00:14:24.781 sys 0m6.815s 00:14:24.781 22:00:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.781 22:00:35 -- common/autotest_common.sh@10 -- # set +x 00:14:24.781 ************************************ 00:14:24.781 END TEST nvmf_multitarget 00:14:24.781 ************************************ 00:14:25.046 22:00:35 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:25.046 22:00:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:25.046 22:00:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:25.046 22:00:35 -- common/autotest_common.sh@10 -- # set +x 00:14:25.046 ************************************ 00:14:25.046 START TEST nvmf_rpc 00:14:25.046 ************************************ 00:14:25.046 22:00:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:14:25.046 * Looking for test storage... 
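Condensed, the multitarget run that just completed drives the helper script shown in the trace: verify one default target, add two more, verify the count, delete them, and verify again; a sketch using the same helper path and arguments:

  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]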
00:14:25.046 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:25.046 22:00:36 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.046 22:00:36 -- nvmf/common.sh@7 -- # uname -s 00:14:25.046 22:00:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.046 22:00:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.046 22:00:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.046 22:00:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.046 22:00:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.046 22:00:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.046 22:00:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.046 22:00:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.046 22:00:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.046 22:00:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.046 22:00:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:25.046 22:00:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:25.046 22:00:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.046 22:00:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.046 22:00:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.046 22:00:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:25.046 22:00:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.046 22:00:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.046 22:00:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.046 22:00:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.046 22:00:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.047 22:00:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.047 22:00:36 -- paths/export.sh@5 -- # export PATH 00:14:25.047 22:00:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.047 22:00:36 -- nvmf/common.sh@46 -- # : 0 00:14:25.047 22:00:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:25.047 22:00:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:25.047 22:00:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:25.047 22:00:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.047 22:00:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.047 22:00:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:25.047 22:00:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:25.047 22:00:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:25.047 22:00:36 -- target/rpc.sh@11 -- # loops=5 00:14:25.047 22:00:36 -- target/rpc.sh@23 -- # nvmftestinit 00:14:25.047 22:00:36 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:25.047 22:00:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.047 22:00:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:25.047 22:00:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:25.047 22:00:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:25.047 22:00:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.047 22:00:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.047 22:00:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.047 22:00:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:25.047 22:00:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:25.047 22:00:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:25.047 22:00:36 -- common/autotest_common.sh@10 -- # set +x 00:14:33.170 22:00:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:33.170 22:00:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:33.170 22:00:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:33.170 22:00:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:33.170 22:00:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:33.170 22:00:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:33.170 22:00:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:33.170 22:00:43 -- nvmf/common.sh@294 -- # net_devs=() 00:14:33.170 22:00:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:33.170 22:00:43 -- nvmf/common.sh@295 -- # e810=() 00:14:33.170 22:00:43 -- nvmf/common.sh@295 -- # local -ga e810 00:14:33.170 
22:00:43 -- nvmf/common.sh@296 -- # x722=() 00:14:33.170 22:00:43 -- nvmf/common.sh@296 -- # local -ga x722 00:14:33.170 22:00:43 -- nvmf/common.sh@297 -- # mlx=() 00:14:33.170 22:00:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:33.170 22:00:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.170 22:00:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:33.170 22:00:43 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:14:33.170 22:00:43 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:14:33.170 22:00:43 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:14:33.171 22:00:43 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:14:33.171 22:00:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:33.171 22:00:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:33.171 22:00:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:14:33.171 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:14:33.171 22:00:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:33.171 22:00:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:33.171 22:00:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:14:33.171 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:14:33.171 22:00:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:14:33.171 22:00:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:33.171 22:00:43 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:33.171 22:00:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.171 22:00:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:33.171 22:00:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:33.171 22:00:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:14:33.171 Found net devices under 0000:d9:00.0: mlx_0_0 00:14:33.171 22:00:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.171 22:00:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:33.171 22:00:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.171 22:00:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:33.171 22:00:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.171 22:00:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:14:33.171 Found net devices under 0000:d9:00.1: mlx_0_1 00:14:33.171 22:00:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.171 22:00:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:33.171 22:00:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:33.171 22:00:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:14:33.171 22:00:43 -- nvmf/common.sh@408 -- # rdma_device_init 00:14:33.171 22:00:43 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:14:33.171 22:00:43 -- nvmf/common.sh@57 -- # uname 00:14:33.171 22:00:43 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:14:33.171 22:00:43 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:14:33.171 22:00:43 -- nvmf/common.sh@62 -- # modprobe ib_core 00:14:33.171 22:00:43 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:14:33.171 22:00:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:14:33.171 22:00:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:14:33.171 22:00:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:14:33.171 22:00:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:14:33.171 22:00:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:14:33.171 22:00:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:33.171 22:00:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:14:33.171 22:00:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:33.171 22:00:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:33.171 22:00:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:33.171 22:00:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:33.171 22:00:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:33.171 22:00:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:33.171 22:00:44 -- nvmf/common.sh@104 -- # continue 2 00:14:33.171 22:00:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:33.171 22:00:44 -- nvmf/common.sh@104 -- # continue 2 00:14:33.171 22:00:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:33.171 22:00:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:14:33.171 22:00:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:33.171 22:00:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:14:33.171 22:00:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:14:33.171 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:33.171 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:14:33.171 altname enp217s0f0np0 00:14:33.171 altname ens818f0np0 00:14:33.171 inet 192.168.100.8/24 scope global mlx_0_0 00:14:33.171 valid_lft forever preferred_lft forever 00:14:33.171 22:00:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:14:33.171 22:00:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:14:33.171 22:00:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:33.171 22:00:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:14:33.171 22:00:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:14:33.171 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:33.171 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:14:33.171 altname enp217s0f1np1 00:14:33.171 altname ens818f1np1 00:14:33.171 inet 192.168.100.9/24 scope global mlx_0_1 00:14:33.171 valid_lft forever preferred_lft forever 00:14:33.171 22:00:44 -- nvmf/common.sh@410 -- # return 0 00:14:33.171 22:00:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:33.171 22:00:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:33.171 22:00:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:14:33.171 22:00:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:14:33.171 22:00:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:33.171 22:00:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:14:33.171 22:00:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:14:33.171 22:00:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:33.171 22:00:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:14:33.171 22:00:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:14:33.171 22:00:44 -- nvmf/common.sh@104 -- # continue 2 00:14:33.171 22:00:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:33.171 22:00:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:33.171 22:00:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:14:33.171 22:00:44 -- nvmf/common.sh@104 -- # continue 2 00:14:33.171 22:00:44 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:14:33.171 22:00:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:14:33.171 22:00:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:33.171 22:00:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:14:33.171 22:00:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:14:33.171 22:00:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:14:33.171 22:00:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:14:33.171 22:00:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:14:33.171 192.168.100.9' 00:14:33.171 22:00:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:14:33.171 192.168.100.9' 00:14:33.171 22:00:44 -- nvmf/common.sh@445 -- # head -n 1 00:14:33.171 22:00:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:33.171 22:00:44 -- nvmf/common.sh@446 -- # head -n 1 00:14:33.171 22:00:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:33.171 192.168.100.9' 00:14:33.171 22:00:44 -- nvmf/common.sh@446 -- # tail -n +2 00:14:33.171 22:00:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:33.171 22:00:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:14:33.171 22:00:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:33.171 22:00:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:14:33.171 22:00:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:14:33.171 22:00:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:14:33.171 22:00:44 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:33.171 22:00:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:33.171 22:00:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:33.172 22:00:44 -- common/autotest_common.sh@10 -- # set +x 00:14:33.172 22:00:44 -- nvmf/common.sh@469 -- # nvmfpid=2118748 00:14:33.172 22:00:44 -- nvmf/common.sh@470 -- # waitforlisten 2118748 00:14:33.172 22:00:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:33.172 22:00:44 -- common/autotest_common.sh@819 -- # '[' -z 2118748 ']' 00:14:33.172 22:00:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.172 22:00:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:33.172 22:00:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.172 22:00:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:33.172 22:00:44 -- common/autotest_common.sh@10 -- # set +x 00:14:33.172 [2024-07-26 22:00:44.258085] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
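The allocate_nic_ips/get_ip_address steps above read each RDMA interface's IPv4 address with a small ip/awk/cut pipeline; the same pipeline on its own, with the interface name and address taken from this run:

  interface=mlx_0_0
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8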
00:14:33.172 [2024-07-26 22:00:44.258139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.172 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.172 [2024-07-26 22:00:44.348381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.172 [2024-07-26 22:00:44.386370] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:33.172 [2024-07-26 22:00:44.386484] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.172 [2024-07-26 22:00:44.386493] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.172 [2024-07-26 22:00:44.386502] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.172 [2024-07-26 22:00:44.386544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.172 [2024-07-26 22:00:44.386652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.172 [2024-07-26 22:00:44.386717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.172 [2024-07-26 22:00:44.386720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.108 22:00:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:34.108 22:00:45 -- common/autotest_common.sh@852 -- # return 0 00:14:34.108 22:00:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:34.108 22:00:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:34.108 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.108 22:00:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.108 22:00:45 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:34.108 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.108 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.108 22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.108 22:00:45 -- target/rpc.sh@26 -- # stats='{ 00:14:34.108 "tick_rate": 2500000000, 00:14:34.108 "poll_groups": [ 00:14:34.108 { 00:14:34.108 "name": "nvmf_tgt_poll_group_0", 00:14:34.108 "admin_qpairs": 0, 00:14:34.108 "io_qpairs": 0, 00:14:34.108 "current_admin_qpairs": 0, 00:14:34.108 "current_io_qpairs": 0, 00:14:34.108 "pending_bdev_io": 0, 00:14:34.108 "completed_nvme_io": 0, 00:14:34.108 "transports": [] 00:14:34.108 }, 00:14:34.108 { 00:14:34.108 "name": "nvmf_tgt_poll_group_1", 00:14:34.108 "admin_qpairs": 0, 00:14:34.108 "io_qpairs": 0, 00:14:34.108 "current_admin_qpairs": 0, 00:14:34.108 "current_io_qpairs": 0, 00:14:34.108 "pending_bdev_io": 0, 00:14:34.108 "completed_nvme_io": 0, 00:14:34.108 "transports": [] 00:14:34.108 }, 00:14:34.108 { 00:14:34.108 "name": "nvmf_tgt_poll_group_2", 00:14:34.108 "admin_qpairs": 0, 00:14:34.108 "io_qpairs": 0, 00:14:34.108 "current_admin_qpairs": 0, 00:14:34.108 "current_io_qpairs": 0, 00:14:34.108 "pending_bdev_io": 0, 00:14:34.108 "completed_nvme_io": 0, 00:14:34.108 "transports": [] 00:14:34.108 }, 00:14:34.108 { 00:14:34.108 "name": "nvmf_tgt_poll_group_3", 00:14:34.108 "admin_qpairs": 0, 00:14:34.108 "io_qpairs": 0, 00:14:34.108 "current_admin_qpairs": 0, 00:14:34.108 "current_io_qpairs": 0, 00:14:34.108 "pending_bdev_io": 0, 00:14:34.108 "completed_nvme_io": 0, 00:14:34.108 "transports": [] 
00:14:34.108 } 00:14:34.108 ] 00:14:34.108 }' 00:14:34.108 22:00:45 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:34.108 22:00:45 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:34.108 22:00:45 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:34.108 22:00:45 -- target/rpc.sh@15 -- # wc -l 00:14:34.108 22:00:45 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:34.108 22:00:45 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:34.108 22:00:45 -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:34.108 22:00:45 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:34.108 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.108 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.108 [2024-07-26 22:00:45.246206] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22f1510/0x22f5a00) succeed. 00:14:34.108 [2024-07-26 22:00:45.256905] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22f2b00/0x2337090) succeed. 00:14:34.368 22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.368 22:00:45 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:34.368 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.368 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.368 22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.368 22:00:45 -- target/rpc.sh@33 -- # stats='{ 00:14:34.368 "tick_rate": 2500000000, 00:14:34.368 "poll_groups": [ 00:14:34.368 { 00:14:34.368 "name": "nvmf_tgt_poll_group_0", 00:14:34.368 "admin_qpairs": 0, 00:14:34.368 "io_qpairs": 0, 00:14:34.368 "current_admin_qpairs": 0, 00:14:34.368 "current_io_qpairs": 0, 00:14:34.368 "pending_bdev_io": 0, 00:14:34.368 "completed_nvme_io": 0, 00:14:34.368 "transports": [ 00:14:34.368 { 00:14:34.368 "trtype": "RDMA", 00:14:34.368 "pending_data_buffer": 0, 00:14:34.368 "devices": [ 00:14:34.368 { 00:14:34.368 "name": "mlx5_0", 00:14:34.368 "polls": 15626, 00:14:34.368 "idle_polls": 15626, 00:14:34.368 "completions": 0, 00:14:34.368 "requests": 0, 00:14:34.368 "request_latency": 0, 00:14:34.368 "pending_free_request": 0, 00:14:34.368 "pending_rdma_read": 0, 00:14:34.368 "pending_rdma_write": 0, 00:14:34.368 "pending_rdma_send": 0, 00:14:34.368 "total_send_wrs": 0, 00:14:34.368 "send_doorbell_updates": 0, 00:14:34.368 "total_recv_wrs": 4096, 00:14:34.368 "recv_doorbell_updates": 1 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": "mlx5_1", 00:14:34.368 "polls": 15626, 00:14:34.368 "idle_polls": 15626, 00:14:34.368 "completions": 0, 00:14:34.368 "requests": 0, 00:14:34.368 "request_latency": 0, 00:14:34.368 "pending_free_request": 0, 00:14:34.368 "pending_rdma_read": 0, 00:14:34.368 "pending_rdma_write": 0, 00:14:34.368 "pending_rdma_send": 0, 00:14:34.368 "total_send_wrs": 0, 00:14:34.368 "send_doorbell_updates": 0, 00:14:34.368 "total_recv_wrs": 4096, 00:14:34.368 "recv_doorbell_updates": 1 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": "nvmf_tgt_poll_group_1", 00:14:34.368 "admin_qpairs": 0, 00:14:34.368 "io_qpairs": 0, 00:14:34.368 "current_admin_qpairs": 0, 00:14:34.368 "current_io_qpairs": 0, 00:14:34.368 "pending_bdev_io": 0, 00:14:34.368 "completed_nvme_io": 0, 00:14:34.368 "transports": [ 00:14:34.368 { 00:14:34.368 "trtype": "RDMA", 00:14:34.368 "pending_data_buffer": 0, 00:14:34.368 "devices": [ 00:14:34.368 { 00:14:34.368 "name": "mlx5_0", 00:14:34.368 "polls": 9915, 
00:14:34.368 "idle_polls": 9915, 00:14:34.368 "completions": 0, 00:14:34.368 "requests": 0, 00:14:34.368 "request_latency": 0, 00:14:34.368 "pending_free_request": 0, 00:14:34.368 "pending_rdma_read": 0, 00:14:34.368 "pending_rdma_write": 0, 00:14:34.368 "pending_rdma_send": 0, 00:14:34.368 "total_send_wrs": 0, 00:14:34.368 "send_doorbell_updates": 0, 00:14:34.368 "total_recv_wrs": 4096, 00:14:34.368 "recv_doorbell_updates": 1 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": "mlx5_1", 00:14:34.368 "polls": 9915, 00:14:34.368 "idle_polls": 9915, 00:14:34.368 "completions": 0, 00:14:34.368 "requests": 0, 00:14:34.368 "request_latency": 0, 00:14:34.368 "pending_free_request": 0, 00:14:34.368 "pending_rdma_read": 0, 00:14:34.368 "pending_rdma_write": 0, 00:14:34.368 "pending_rdma_send": 0, 00:14:34.368 "total_send_wrs": 0, 00:14:34.368 "send_doorbell_updates": 0, 00:14:34.368 "total_recv_wrs": 4096, 00:14:34.368 "recv_doorbell_updates": 1 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": "nvmf_tgt_poll_group_2", 00:14:34.368 "admin_qpairs": 0, 00:14:34.368 "io_qpairs": 0, 00:14:34.368 "current_admin_qpairs": 0, 00:14:34.368 "current_io_qpairs": 0, 00:14:34.368 "pending_bdev_io": 0, 00:14:34.368 "completed_nvme_io": 0, 00:14:34.368 "transports": [ 00:14:34.368 { 00:14:34.368 "trtype": "RDMA", 00:14:34.368 "pending_data_buffer": 0, 00:14:34.368 "devices": [ 00:14:34.368 { 00:14:34.368 "name": "mlx5_0", 00:14:34.368 "polls": 5699, 00:14:34.368 "idle_polls": 5699, 00:14:34.368 "completions": 0, 00:14:34.368 "requests": 0, 00:14:34.368 "request_latency": 0, 00:14:34.368 "pending_free_request": 0, 00:14:34.368 "pending_rdma_read": 0, 00:14:34.368 "pending_rdma_write": 0, 00:14:34.368 "pending_rdma_send": 0, 00:14:34.368 "total_send_wrs": 0, 00:14:34.368 "send_doorbell_updates": 0, 00:14:34.368 "total_recv_wrs": 4096, 00:14:34.368 "recv_doorbell_updates": 1 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": "mlx5_1", 00:14:34.368 "polls": 5699, 00:14:34.368 "idle_polls": 5699, 00:14:34.368 "completions": 0, 00:14:34.368 "requests": 0, 00:14:34.368 "request_latency": 0, 00:14:34.368 "pending_free_request": 0, 00:14:34.368 "pending_rdma_read": 0, 00:14:34.368 "pending_rdma_write": 0, 00:14:34.368 "pending_rdma_send": 0, 00:14:34.368 "total_send_wrs": 0, 00:14:34.368 "send_doorbell_updates": 0, 00:14:34.368 "total_recv_wrs": 4096, 00:14:34.368 "recv_doorbell_updates": 1 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": "nvmf_tgt_poll_group_3", 00:14:34.368 "admin_qpairs": 0, 00:14:34.368 "io_qpairs": 0, 00:14:34.368 "current_admin_qpairs": 0, 00:14:34.368 "current_io_qpairs": 0, 00:14:34.368 "pending_bdev_io": 0, 00:14:34.368 "completed_nvme_io": 0, 00:14:34.368 "transports": [ 00:14:34.368 { 00:14:34.368 "trtype": "RDMA", 00:14:34.368 "pending_data_buffer": 0, 00:14:34.368 "devices": [ 00:14:34.368 { 00:14:34.368 "name": "mlx5_0", 00:14:34.368 "polls": 878, 00:14:34.368 "idle_polls": 878, 00:14:34.368 "completions": 0, 00:14:34.368 "requests": 0, 00:14:34.368 "request_latency": 0, 00:14:34.368 "pending_free_request": 0, 00:14:34.368 "pending_rdma_read": 0, 00:14:34.368 "pending_rdma_write": 0, 00:14:34.368 "pending_rdma_send": 0, 00:14:34.368 "total_send_wrs": 0, 00:14:34.368 "send_doorbell_updates": 0, 00:14:34.368 "total_recv_wrs": 4096, 00:14:34.368 "recv_doorbell_updates": 1 00:14:34.368 }, 00:14:34.368 { 00:14:34.368 "name": "mlx5_1", 00:14:34.368 "polls": 878, 
00:14:34.368 "idle_polls": 878, 00:14:34.368 "completions": 0, 00:14:34.368 "requests": 0, 00:14:34.368 "request_latency": 0, 00:14:34.368 "pending_free_request": 0, 00:14:34.368 "pending_rdma_read": 0, 00:14:34.368 "pending_rdma_write": 0, 00:14:34.368 "pending_rdma_send": 0, 00:14:34.368 "total_send_wrs": 0, 00:14:34.368 "send_doorbell_updates": 0, 00:14:34.368 "total_recv_wrs": 4096, 00:14:34.368 "recv_doorbell_updates": 1 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 } 00:14:34.368 ] 00:14:34.368 }' 00:14:34.368 22:00:45 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:34.368 22:00:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:34.368 22:00:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:34.368 22:00:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:34.368 22:00:45 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:34.368 22:00:45 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:34.368 22:00:45 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:34.368 22:00:45 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:34.368 22:00:45 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:34.368 22:00:45 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:34.368 22:00:45 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:14:34.368 22:00:45 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:14:34.368 22:00:45 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:14:34.368 22:00:45 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:14:34.368 22:00:45 -- target/rpc.sh@15 -- # wc -l 00:14:34.368 22:00:45 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:14:34.368 22:00:45 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:14:34.628 22:00:45 -- target/rpc.sh@41 -- # transport_type=RDMA 00:14:34.628 22:00:45 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:14:34.628 22:00:45 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:14:34.628 22:00:45 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:14:34.628 22:00:45 -- target/rpc.sh@15 -- # wc -l 00:14:34.628 22:00:45 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:14:34.628 22:00:45 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:14:34.628 22:00:45 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:34.628 22:00:45 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:34.628 22:00:45 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:34.628 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.628 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.628 Malloc1 00:14:34.628 22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.628 22:00:45 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:34.628 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.628 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.628 22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.628 22:00:45 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:34.628 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.628 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.628 22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.628 
22:00:45 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:34.628 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.628 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.628 22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.628 22:00:45 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:34.628 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.628 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.628 [2024-07-26 22:00:45.706021] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:34.628 22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.628 22:00:45 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:34.628 22:00:45 -- common/autotest_common.sh@640 -- # local es=0 00:14:34.628 22:00:45 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:34.628 22:00:45 -- common/autotest_common.sh@628 -- # local arg=nvme 00:14:34.628 22:00:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.628 22:00:45 -- common/autotest_common.sh@632 -- # type -t nvme 00:14:34.628 22:00:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.628 22:00:45 -- common/autotest_common.sh@634 -- # type -P nvme 00:14:34.628 22:00:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:34.628 22:00:45 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:14:34.628 22:00:45 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:14:34.628 22:00:45 -- common/autotest_common.sh@643 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:14:34.628 [2024-07-26 22:00:45.757949] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:34.628 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:34.628 could not add new controller: failed to write to nvme-fabrics device 00:14:34.628 22:00:45 -- common/autotest_common.sh@643 -- # es=1 00:14:34.628 22:00:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:34.628 22:00:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:34.628 22:00:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:34.628 22:00:45 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:34.628 22:00:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.628 22:00:45 -- common/autotest_common.sh@10 -- # set +x 00:14:34.628 
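The connect failure above is the expected result of the host-whitelist check: allow_any_host was turned off for cnode1, so nvmf_qpair_access_allowed rejects a host NQN that has not been added with nvmf_subsystem_add_host; once the host is added, the connect immediately below succeeds. A hedged sketch of the same sequence outside the harness, with the host NQN/UUID copied from the trace:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 --hostnqn=$HOSTNQN   # rejected: host not allowed
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 $HOSTNQN
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 --hostnqn=$HOSTNQN   # accepted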
22:00:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.628 22:00:45 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:35.567 22:00:46 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:35.567 22:00:46 -- common/autotest_common.sh@1177 -- # local i=0 00:14:35.567 22:00:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:35.567 22:00:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:35.567 22:00:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:38.105 22:00:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:38.105 22:00:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:38.105 22:00:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:38.105 22:00:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:38.105 22:00:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:38.105 22:00:48 -- common/autotest_common.sh@1187 -- # return 0 00:14:38.105 22:00:48 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:38.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.673 22:00:49 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:38.673 22:00:49 -- common/autotest_common.sh@1198 -- # local i=0 00:14:38.673 22:00:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:38.673 22:00:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.673 22:00:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:38.673 22:00:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:38.673 22:00:49 -- common/autotest_common.sh@1210 -- # return 0 00:14:38.673 22:00:49 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:38.673 22:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.673 22:00:49 -- common/autotest_common.sh@10 -- # set +x 00:14:38.673 22:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.674 22:00:49 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:38.674 22:00:49 -- common/autotest_common.sh@640 -- # local es=0 00:14:38.674 22:00:49 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:38.674 22:00:49 -- common/autotest_common.sh@628 -- # local arg=nvme 00:14:38.674 22:00:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:38.674 22:00:49 -- common/autotest_common.sh@632 -- # type -t nvme 00:14:38.674 22:00:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:38.674 22:00:49 -- common/autotest_common.sh@634 -- # type -P nvme 00:14:38.674 22:00:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:38.674 22:00:49 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:14:38.674 
22:00:49 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:14:38.674 22:00:49 -- common/autotest_common.sh@643 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:38.674 [2024-07-26 22:00:49.860105] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:14:38.933 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:38.933 could not add new controller: failed to write to nvme-fabrics device 00:14:38.933 22:00:49 -- common/autotest_common.sh@643 -- # es=1 00:14:38.933 22:00:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:38.933 22:00:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:38.933 22:00:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:38.933 22:00:49 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:38.933 22:00:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.933 22:00:49 -- common/autotest_common.sh@10 -- # set +x 00:14:38.933 22:00:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.933 22:00:49 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:39.870 22:00:50 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.870 22:00:50 -- common/autotest_common.sh@1177 -- # local i=0 00:14:39.870 22:00:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.870 22:00:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:39.870 22:00:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:41.773 22:00:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:41.773 22:00:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:41.773 22:00:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:41.773 22:00:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:41.773 22:00:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.773 22:00:52 -- common/autotest_common.sh@1187 -- # return 0 00:14:41.773 22:00:52 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.708 22:00:53 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:42.708 22:00:53 -- common/autotest_common.sh@1198 -- # local i=0 00:14:42.708 22:00:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:42.708 22:00:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.708 22:00:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:42.708 22:00:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.708 22:00:53 -- common/autotest_common.sh@1210 -- # return 0 00:14:42.708 22:00:53 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.708 22:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.708 22:00:53 -- common/autotest_common.sh@10 -- # set +x 00:14:42.708 22:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
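The waitforserial/waitforserial_disconnect traces seen here (and on every later connect/disconnect) simply poll lsblk until a block device carrying the subsystem serial SPDKISFASTANDAWESOME appears or disappears. A rough reconstruction inferred from the xtrace, not the verbatim helpers from the harness's autotest_common.sh:

  waitforserial() {
      # wait (up to ~15 tries, 2 s apart) for N devices with the given serial to show up
      local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
      while (( i++ <= 15 )); do
          sleep 2
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
      done
      return 1
  }

  waitforserial_disconnect() {
      # wait for every device with the given serial to disappear after nvme disconnect
      local serial=$1 i=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( ++i > 15 )) && return 1
          sleep 1
      done
      return 0
  }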
00:14:42.966 22:00:53 -- target/rpc.sh@81 -- # seq 1 5 00:14:42.966 22:00:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:42.966 22:00:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:42.966 22:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.966 22:00:53 -- common/autotest_common.sh@10 -- # set +x 00:14:42.966 22:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.966 22:00:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:42.966 22:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.966 22:00:53 -- common/autotest_common.sh@10 -- # set +x 00:14:42.966 [2024-07-26 22:00:53.951469] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:42.966 22:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.966 22:00:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:42.966 22:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.966 22:00:53 -- common/autotest_common.sh@10 -- # set +x 00:14:42.966 22:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.966 22:00:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:42.966 22:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.966 22:00:53 -- common/autotest_common.sh@10 -- # set +x 00:14:42.966 22:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.966 22:00:53 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:43.901 22:00:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:43.901 22:00:54 -- common/autotest_common.sh@1177 -- # local i=0 00:14:43.901 22:00:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.901 22:00:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:43.901 22:00:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:45.804 22:00:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:45.804 22:00:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:45.804 22:00:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.804 22:00:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:45.804 22:00:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.804 22:00:56 -- common/autotest_common.sh@1187 -- # return 0 00:14:45.804 22:00:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:46.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.742 22:00:57 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:46.742 22:00:57 -- common/autotest_common.sh@1198 -- # local i=0 00:14:46.742 22:00:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:46.742 22:00:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.001 22:00:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:47.001 22:00:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.001 22:00:57 -- common/autotest_common.sh@1210 -- # return 0 00:14:47.001 
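The loop that starts here (the seq 1 5 above sets the count) repeats the same subsystem lifecycle five times: create cnode1, add the RDMA listener, attach Malloc1 as NSID 5, open it to any host, connect and verify the namespace from the initiator side, then tear everything down. Condensed into one hedged sketch (flags copied from the trace, rpc.py invocation assumed):

  for i in $(seq 1 5); do
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # explicit NSID 5
      ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
          --hostid=8013ee90-59d8-e711-906e-00163566263e
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done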
22:00:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:47.001 22:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.001 22:00:57 -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 22:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.001 22:00:58 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.001 22:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.001 22:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 22:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.001 22:00:58 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:47.001 22:00:58 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.001 22:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.001 22:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 22:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.001 22:00:58 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:47.001 22:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.001 22:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 [2024-07-26 22:00:58.024647] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:47.001 22:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.001 22:00:58 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:47.001 22:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.001 22:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:47.001 22:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.001 22:00:58 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.001 22:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.002 22:00:58 -- common/autotest_common.sh@10 -- # set +x 00:14:47.002 22:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.002 22:00:58 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:47.940 22:00:59 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:47.940 22:00:59 -- common/autotest_common.sh@1177 -- # local i=0 00:14:47.940 22:00:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:47.940 22:00:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:47.940 22:00:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:49.899 22:01:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:49.899 22:01:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:49.899 22:01:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:49.899 22:01:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:49.899 22:01:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:49.899 22:01:01 -- common/autotest_common.sh@1187 -- # return 0 00:14:49.899 22:01:01 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.835 22:01:02 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.835 22:01:02 -- common/autotest_common.sh@1198 -- # local i=0 00:14:50.835 22:01:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:50.835 22:01:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.835 22:01:02 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:50.835 22:01:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.835 22:01:02 -- common/autotest_common.sh@1210 -- # return 0 00:14:50.835 22:01:02 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:50.835 22:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.835 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 22:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.095 22:01:02 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.095 22:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.095 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 22:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.095 22:01:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:51.095 22:01:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.095 22:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.095 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 22:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.095 22:01:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:51.095 22:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.095 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 [2024-07-26 22:01:02.089530] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:51.095 22:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.095 22:01:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:51.095 22:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.095 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 22:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.095 22:01:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.095 22:01:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.095 22:01:02 -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 22:01:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.095 22:01:02 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:52.033 22:01:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:52.033 22:01:03 -- common/autotest_common.sh@1177 -- # local i=0 00:14:52.033 22:01:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.033 22:01:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:52.033 22:01:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:53.937 22:01:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:53.937 22:01:05 -- 
common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:53.937 22:01:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.937 22:01:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:53.937 22:01:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.937 22:01:05 -- common/autotest_common.sh@1187 -- # return 0 00:14:53.937 22:01:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.874 22:01:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.874 22:01:06 -- common/autotest_common.sh@1198 -- # local i=0 00:14:54.874 22:01:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:54.874 22:01:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.134 22:01:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:55.134 22:01:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:55.134 22:01:06 -- common/autotest_common.sh@1210 -- # return 0 00:14:55.134 22:01:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:55.134 22:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.134 22:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:55.134 22:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.134 22:01:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:55.134 22:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.134 22:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:55.134 22:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.134 22:01:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:55.134 22:01:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:55.134 22:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.134 22:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:55.134 22:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.134 22:01:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:55.134 22:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.134 22:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:55.134 [2024-07-26 22:01:06.153712] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:55.134 22:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.134 22:01:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:55.134 22:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.134 22:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:55.134 22:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.134 22:01:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:55.134 22:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:55.134 22:01:06 -- common/autotest_common.sh@10 -- # set +x 00:14:55.134 22:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:55.134 22:01:06 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:56.073 22:01:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.073 22:01:07 -- common/autotest_common.sh@1177 -- # local i=0 00:14:56.073 22:01:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.073 22:01:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:14:56.073 22:01:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:57.979 22:01:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:57.979 22:01:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:57.979 22:01:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:57.979 22:01:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:14:57.979 22:01:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.979 22:01:09 -- common/autotest_common.sh@1187 -- # return 0 00:14:57.979 22:01:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:58.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.176 22:01:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.176 22:01:10 -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.176 22:01:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:59.176 22:01:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.176 22:01:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:59.176 22:01:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.176 22:01:10 -- common/autotest_common.sh@1210 -- # return 0 00:14:59.176 22:01:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.176 22:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.176 22:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:59.176 22:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.176 22:01:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.176 22:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.176 22:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:59.176 22:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.176 22:01:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:59.176 22:01:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.176 22:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.176 22:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:59.176 22:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.176 22:01:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:59.176 22:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.176 22:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:59.177 [2024-07-26 22:01:10.214461] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:59.177 22:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.177 22:01:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:59.177 22:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.177 22:01:10 -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.177 22:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.177 22:01:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.177 22:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:59.177 22:01:10 -- common/autotest_common.sh@10 -- # set +x 00:14:59.177 22:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:59.177 22:01:10 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:00.113 22:01:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.113 22:01:11 -- common/autotest_common.sh@1177 -- # local i=0 00:15:00.113 22:01:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.113 22:01:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:00.113 22:01:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:02.023 22:01:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:02.023 22:01:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:02.023 22:01:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.023 22:01:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:02.023 22:01:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.023 22:01:13 -- common/autotest_common.sh@1187 -- # return 0 00:15:02.023 22:01:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.399 22:01:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.399 22:01:14 -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.399 22:01:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:03.399 22:01:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.399 22:01:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:03.399 22:01:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.399 22:01:14 -- common/autotest_common.sh@1210 -- # return 0 00:15:03.399 22:01:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@99 -- # seq 1 5 00:15:03.399 22:01:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:03.399 22:01:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 [2024-07-26 22:01:14.281803] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:03.399 22:01:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 [2024-07-26 22:01:14.329933] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 
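Once this last create/delete loop (which only exercises the RPCs, with no initiator connect) finishes, the script pulls nvmf_get_stats a final time and checks the aggregate counters with the jcount/jsum helpers: total admin and I/O queue pairs, RDMA completions, and accumulated request latency all have to be greater than zero. Judging from their xtrace, the helpers are thin jq/awk wrappers over the captured stats JSON; a rough reconstruction, not the verbatim rpc.sh source:

  # assumes the caller has captured: stats=$(rpc_cmd nvmf_get_stats)
  jcount() {   # count how many values a jq filter yields
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l
  }
  jsum() {     # sum the numeric values a jq filter yields
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
  }
  # example check, matching the trace below: (( $(jsum '.poll_groups[].io_qpairs') > 0 ))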
22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:03.399 22:01:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 [2024-07-26 22:01:14.378137] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.399 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.399 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.399 22:01:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:03.399 22:01:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.399 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 [2024-07-26 22:01:14.430316] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:03.400 22:01:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 [2024-07-26 22:01:14.478498] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.400 22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:03.400 
22:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.400 22:01:14 -- common/autotest_common.sh@10 -- # set +x 00:15:03.400 22:01:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.400 22:01:14 -- target/rpc.sh@110 -- # stats='{ 00:15:03.400 "tick_rate": 2500000000, 00:15:03.400 "poll_groups": [ 00:15:03.400 { 00:15:03.400 "name": "nvmf_tgt_poll_group_0", 00:15:03.400 "admin_qpairs": 2, 00:15:03.400 "io_qpairs": 27, 00:15:03.400 "current_admin_qpairs": 0, 00:15:03.400 "current_io_qpairs": 0, 00:15:03.400 "pending_bdev_io": 0, 00:15:03.400 "completed_nvme_io": 175, 00:15:03.400 "transports": [ 00:15:03.400 { 00:15:03.400 "trtype": "RDMA", 00:15:03.400 "pending_data_buffer": 0, 00:15:03.400 "devices": [ 00:15:03.400 { 00:15:03.400 "name": "mlx5_0", 00:15:03.400 "polls": 3407243, 00:15:03.400 "idle_polls": 3406843, 00:15:03.400 "completions": 459, 00:15:03.400 "requests": 229, 00:15:03.400 "request_latency": 49030394, 00:15:03.400 "pending_free_request": 0, 00:15:03.400 "pending_rdma_read": 0, 00:15:03.400 "pending_rdma_write": 0, 00:15:03.400 "pending_rdma_send": 0, 00:15:03.400 "total_send_wrs": 403, 00:15:03.400 "send_doorbell_updates": 195, 00:15:03.400 "total_recv_wrs": 4325, 00:15:03.400 "recv_doorbell_updates": 195 00:15:03.400 }, 00:15:03.400 { 00:15:03.400 "name": "mlx5_1", 00:15:03.400 "polls": 3407243, 00:15:03.400 "idle_polls": 3407243, 00:15:03.400 "completions": 0, 00:15:03.400 "requests": 0, 00:15:03.400 "request_latency": 0, 00:15:03.400 "pending_free_request": 0, 00:15:03.400 "pending_rdma_read": 0, 00:15:03.400 "pending_rdma_write": 0, 00:15:03.400 "pending_rdma_send": 0, 00:15:03.400 "total_send_wrs": 0, 00:15:03.400 "send_doorbell_updates": 0, 00:15:03.400 "total_recv_wrs": 4096, 00:15:03.400 "recv_doorbell_updates": 1 00:15:03.400 } 00:15:03.400 ] 00:15:03.400 } 00:15:03.400 ] 00:15:03.400 }, 00:15:03.400 { 00:15:03.400 "name": "nvmf_tgt_poll_group_1", 00:15:03.400 "admin_qpairs": 2, 00:15:03.400 "io_qpairs": 26, 00:15:03.400 "current_admin_qpairs": 0, 00:15:03.400 "current_io_qpairs": 0, 00:15:03.400 "pending_bdev_io": 0, 00:15:03.400 "completed_nvme_io": 70, 00:15:03.400 "transports": [ 00:15:03.400 { 00:15:03.400 "trtype": "RDMA", 00:15:03.400 "pending_data_buffer": 0, 00:15:03.400 "devices": [ 00:15:03.400 { 00:15:03.400 "name": "mlx5_0", 00:15:03.400 "polls": 3346306, 00:15:03.400 "idle_polls": 3346079, 00:15:03.400 "completions": 246, 00:15:03.400 "requests": 123, 00:15:03.400 "request_latency": 21095188, 00:15:03.400 "pending_free_request": 0, 00:15:03.400 "pending_rdma_read": 0, 00:15:03.400 "pending_rdma_write": 0, 00:15:03.400 "pending_rdma_send": 0, 00:15:03.400 "total_send_wrs": 192, 00:15:03.400 "send_doorbell_updates": 111, 00:15:03.400 "total_recv_wrs": 4219, 00:15:03.400 "recv_doorbell_updates": 112 00:15:03.400 }, 00:15:03.400 { 00:15:03.400 "name": "mlx5_1", 00:15:03.400 "polls": 3346306, 00:15:03.400 "idle_polls": 3346306, 00:15:03.400 "completions": 0, 00:15:03.400 "requests": 0, 00:15:03.400 "request_latency": 0, 00:15:03.400 "pending_free_request": 0, 00:15:03.400 "pending_rdma_read": 0, 00:15:03.400 "pending_rdma_write": 0, 00:15:03.400 "pending_rdma_send": 0, 00:15:03.400 "total_send_wrs": 0, 00:15:03.400 "send_doorbell_updates": 0, 00:15:03.400 "total_recv_wrs": 4096, 00:15:03.400 "recv_doorbell_updates": 1 00:15:03.400 } 00:15:03.400 ] 00:15:03.400 } 00:15:03.400 ] 00:15:03.400 }, 00:15:03.400 { 00:15:03.400 "name": "nvmf_tgt_poll_group_2", 00:15:03.400 "admin_qpairs": 1, 00:15:03.400 "io_qpairs": 26, 00:15:03.400 
"current_admin_qpairs": 0, 00:15:03.400 "current_io_qpairs": 0, 00:15:03.400 "pending_bdev_io": 0, 00:15:03.400 "completed_nvme_io": 83, 00:15:03.400 "transports": [ 00:15:03.400 { 00:15:03.400 "trtype": "RDMA", 00:15:03.400 "pending_data_buffer": 0, 00:15:03.400 "devices": [ 00:15:03.400 { 00:15:03.400 "name": "mlx5_0", 00:15:03.400 "polls": 3432248, 00:15:03.400 "idle_polls": 3432044, 00:15:03.400 "completions": 225, 00:15:03.400 "requests": 112, 00:15:03.400 "request_latency": 20889110, 00:15:03.400 "pending_free_request": 0, 00:15:03.400 "pending_rdma_read": 0, 00:15:03.400 "pending_rdma_write": 0, 00:15:03.400 "pending_rdma_send": 0, 00:15:03.400 "total_send_wrs": 183, 00:15:03.400 "send_doorbell_updates": 100, 00:15:03.400 "total_recv_wrs": 4208, 00:15:03.400 "recv_doorbell_updates": 100 00:15:03.400 }, 00:15:03.400 { 00:15:03.400 "name": "mlx5_1", 00:15:03.400 "polls": 3432248, 00:15:03.400 "idle_polls": 3432248, 00:15:03.400 "completions": 0, 00:15:03.400 "requests": 0, 00:15:03.400 "request_latency": 0, 00:15:03.400 "pending_free_request": 0, 00:15:03.400 "pending_rdma_read": 0, 00:15:03.400 "pending_rdma_write": 0, 00:15:03.401 "pending_rdma_send": 0, 00:15:03.401 "total_send_wrs": 0, 00:15:03.401 "send_doorbell_updates": 0, 00:15:03.401 "total_recv_wrs": 4096, 00:15:03.401 "recv_doorbell_updates": 1 00:15:03.401 } 00:15:03.401 ] 00:15:03.401 } 00:15:03.401 ] 00:15:03.401 }, 00:15:03.401 { 00:15:03.401 "name": "nvmf_tgt_poll_group_3", 00:15:03.401 "admin_qpairs": 2, 00:15:03.401 "io_qpairs": 26, 00:15:03.401 "current_admin_qpairs": 0, 00:15:03.401 "current_io_qpairs": 0, 00:15:03.401 "pending_bdev_io": 0, 00:15:03.401 "completed_nvme_io": 127, 00:15:03.401 "transports": [ 00:15:03.401 { 00:15:03.401 "trtype": "RDMA", 00:15:03.401 "pending_data_buffer": 0, 00:15:03.401 "devices": [ 00:15:03.401 { 00:15:03.401 "name": "mlx5_0", 00:15:03.401 "polls": 2654229, 00:15:03.401 "idle_polls": 2653912, 00:15:03.401 "completions": 360, 00:15:03.401 "requests": 180, 00:15:03.401 "request_latency": 37216224, 00:15:03.401 "pending_free_request": 0, 00:15:03.401 "pending_rdma_read": 0, 00:15:03.401 "pending_rdma_write": 0, 00:15:03.401 "pending_rdma_send": 0, 00:15:03.401 "total_send_wrs": 306, 00:15:03.401 "send_doorbell_updates": 156, 00:15:03.401 "total_recv_wrs": 4276, 00:15:03.401 "recv_doorbell_updates": 157 00:15:03.401 }, 00:15:03.401 { 00:15:03.401 "name": "mlx5_1", 00:15:03.401 "polls": 2654229, 00:15:03.401 "idle_polls": 2654229, 00:15:03.401 "completions": 0, 00:15:03.401 "requests": 0, 00:15:03.401 "request_latency": 0, 00:15:03.401 "pending_free_request": 0, 00:15:03.401 "pending_rdma_read": 0, 00:15:03.401 "pending_rdma_write": 0, 00:15:03.401 "pending_rdma_send": 0, 00:15:03.401 "total_send_wrs": 0, 00:15:03.401 "send_doorbell_updates": 0, 00:15:03.401 "total_recv_wrs": 4096, 00:15:03.401 "recv_doorbell_updates": 1 00:15:03.401 } 00:15:03.401 ] 00:15:03.401 } 00:15:03.401 ] 00:15:03.401 } 00:15:03.401 ] 00:15:03.401 }' 00:15:03.401 22:01:14 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:03.401 22:01:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:03.401 22:01:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:03.401 22:01:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:03.401 22:01:14 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:03.401 22:01:14 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:03.401 22:01:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:03.401 
22:01:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:03.401 22:01:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:03.660 22:01:14 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:03.660 22:01:14 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:03.660 22:01:14 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:03.660 22:01:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:03.660 22:01:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:03.660 22:01:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:03.660 22:01:14 -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:15:03.660 22:01:14 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:03.660 22:01:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:03.660 22:01:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:03.660 22:01:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:03.660 22:01:14 -- target/rpc.sh@118 -- # (( 128230916 > 0 )) 00:15:03.660 22:01:14 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:03.660 22:01:14 -- target/rpc.sh@123 -- # nvmftestfini 00:15:03.660 22:01:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:03.660 22:01:14 -- nvmf/common.sh@116 -- # sync 00:15:03.660 22:01:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:03.660 22:01:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:03.660 22:01:14 -- nvmf/common.sh@119 -- # set +e 00:15:03.660 22:01:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:03.660 22:01:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:03.660 rmmod nvme_rdma 00:15:03.660 rmmod nvme_fabrics 00:15:03.660 22:01:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:03.660 22:01:14 -- nvmf/common.sh@123 -- # set -e 00:15:03.660 22:01:14 -- nvmf/common.sh@124 -- # return 0 00:15:03.660 22:01:14 -- nvmf/common.sh@477 -- # '[' -n 2118748 ']' 00:15:03.660 22:01:14 -- nvmf/common.sh@478 -- # killprocess 2118748 00:15:03.660 22:01:14 -- common/autotest_common.sh@926 -- # '[' -z 2118748 ']' 00:15:03.660 22:01:14 -- common/autotest_common.sh@930 -- # kill -0 2118748 00:15:03.660 22:01:14 -- common/autotest_common.sh@931 -- # uname 00:15:03.660 22:01:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:03.660 22:01:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2118748 00:15:03.660 22:01:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:03.660 22:01:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:03.660 22:01:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2118748' 00:15:03.660 killing process with pid 2118748 00:15:03.660 22:01:14 -- common/autotest_common.sh@945 -- # kill 2118748 00:15:03.660 22:01:14 -- common/autotest_common.sh@950 -- # wait 2118748 00:15:03.918 22:01:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:03.918 22:01:15 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:03.918 00:15:03.918 real 0m39.137s 00:15:03.918 user 2m4.943s 00:15:03.918 sys 0m7.907s 00:15:03.918 22:01:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.918 22:01:15 -- common/autotest_common.sh@10 -- # set +x 00:15:03.918 ************************************ 00:15:03.918 END TEST nvmf_rpc 00:15:03.918 ************************************ 00:15:04.177 22:01:15 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:04.177 22:01:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:04.177 22:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:04.177 22:01:15 -- common/autotest_common.sh@10 -- # set +x 00:15:04.177 ************************************ 00:15:04.177 START TEST nvmf_invalid 00:15:04.177 ************************************ 00:15:04.177 22:01:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:04.177 * Looking for test storage... 00:15:04.177 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:04.177 22:01:15 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.177 22:01:15 -- nvmf/common.sh@7 -- # uname -s 00:15:04.177 22:01:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.177 22:01:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.177 22:01:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.177 22:01:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.177 22:01:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.177 22:01:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.177 22:01:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.177 22:01:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.177 22:01:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.177 22:01:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.177 22:01:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:04.177 22:01:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:04.177 22:01:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.177 22:01:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.177 22:01:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.177 22:01:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:04.177 22:01:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.177 22:01:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.177 22:01:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.177 22:01:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.177 22:01:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.177 22:01:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.177 22:01:15 -- paths/export.sh@5 -- # export PATH 00:15:04.178 22:01:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.178 22:01:15 -- nvmf/common.sh@46 -- # : 0 00:15:04.178 22:01:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:04.178 22:01:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:04.178 22:01:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:04.178 22:01:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.178 22:01:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.178 22:01:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:04.178 22:01:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:04.178 22:01:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:04.178 22:01:15 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:04.178 22:01:15 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:04.178 22:01:15 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:04.178 22:01:15 -- target/invalid.sh@14 -- # target=foobar 00:15:04.178 22:01:15 -- target/invalid.sh@16 -- # RANDOM=0 00:15:04.178 22:01:15 -- target/invalid.sh@34 -- # nvmftestinit 00:15:04.178 22:01:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:04.178 22:01:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.178 22:01:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:04.178 22:01:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:04.178 22:01:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:04.178 22:01:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.178 22:01:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.178 22:01:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.178 22:01:15 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:04.178 22:01:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:04.178 22:01:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:04.178 22:01:15 -- common/autotest_common.sh@10 -- # set +x 00:15:14.161 22:01:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:14.161 22:01:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:14.161 22:01:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:14.161 22:01:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:14.161 22:01:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:14.161 22:01:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:14.161 22:01:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:14.161 22:01:23 -- nvmf/common.sh@294 -- # net_devs=() 00:15:14.161 22:01:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:14.161 22:01:23 -- nvmf/common.sh@295 -- # e810=() 00:15:14.161 22:01:23 -- nvmf/common.sh@295 -- # local -ga e810 00:15:14.161 22:01:23 -- nvmf/common.sh@296 -- # x722=() 00:15:14.161 22:01:23 -- nvmf/common.sh@296 -- # local -ga x722 00:15:14.161 22:01:23 -- nvmf/common.sh@297 -- # mlx=() 00:15:14.161 22:01:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:14.161 22:01:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.161 22:01:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:14.161 22:01:23 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:14.161 22:01:23 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:14.161 22:01:23 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:14.161 22:01:23 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:14.161 22:01:23 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:14.161 22:01:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:14.161 22:01:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:14.161 22:01:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:14.161 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:14.161 22:01:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:14.161 22:01:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:14.161 22:01:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:14.161 22:01:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:14.161 22:01:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:14.161 22:01:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:14.161 22:01:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:14.161 22:01:23 -- nvmf/common.sh@340 -- # echo 'Found 
0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:14.161 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:14.161 22:01:23 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:14.161 22:01:23 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:14.162 22:01:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:14.162 22:01:23 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.162 22:01:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:14.162 22:01:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.162 22:01:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:14.162 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.162 22:01:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.162 22:01:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:14.162 22:01:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.162 22:01:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:14.162 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.162 22:01:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:14.162 22:01:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:14.162 22:01:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:14.162 22:01:23 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:14.162 22:01:23 -- nvmf/common.sh@57 -- # uname 00:15:14.162 22:01:23 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:14.162 22:01:23 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:14.162 22:01:23 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:14.162 22:01:23 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:14.162 22:01:23 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:14.162 22:01:23 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:14.162 22:01:23 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:14.162 22:01:23 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:14.162 22:01:23 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:14.162 22:01:23 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:14.162 22:01:23 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:14.162 22:01:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:14.162 22:01:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:14.162 22:01:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:14.162 22:01:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:14.162 22:01:23 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:14.162 22:01:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
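The probe trace above is prepare_net_devs mapping each supported RDMA-capable PCI function to its kernel netdev: the harness matches known vendor:device IDs (0x15b3:0x1015, ConnectX-4 Lx, in this run), then globs /sys/bus/pci/devices/$pci/net/ to recover the interface name (mlx_0_0, mlx_0_1). A minimal standalone sketch of that lookup, assuming direct sysfs reads rather than the harness's cached PCI table, looks like this; the get_rdma_if_list walk continues in the trace below:

    pci=0000:d9:00.0                                  # PCI function taken from the trace above
    vendor=$(cat /sys/bus/pci/devices/$pci/vendor)    # expect 0x15b3 (Mellanox)
    device=$(cat /sys/bus/pci/devices/$pci/device)    # expect 0x1015 (ConnectX-4 Lx)
    # same glob common.sh uses to find the net device bound to this function
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"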
00:15:14.162 22:01:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@104 -- # continue 2 00:15:14.162 22:01:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@104 -- # continue 2 00:15:14.162 22:01:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:14.162 22:01:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:14.162 22:01:23 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:14.162 22:01:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:14.162 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:14.162 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:14.162 altname enp217s0f0np0 00:15:14.162 altname ens818f0np0 00:15:14.162 inet 192.168.100.8/24 scope global mlx_0_0 00:15:14.162 valid_lft forever preferred_lft forever 00:15:14.162 22:01:23 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:14.162 22:01:23 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:14.162 22:01:23 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:14.162 22:01:23 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:14.162 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:14.162 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:14.162 altname enp217s0f1np1 00:15:14.162 altname ens818f1np1 00:15:14.162 inet 192.168.100.9/24 scope global mlx_0_1 00:15:14.162 valid_lft forever preferred_lft forever 00:15:14.162 22:01:23 -- nvmf/common.sh@410 -- # return 0 00:15:14.162 22:01:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:14.162 22:01:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:14.162 22:01:23 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:14.162 22:01:23 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:14.162 22:01:23 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:14.162 22:01:23 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:14.162 22:01:23 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:14.162 22:01:23 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:14.162 22:01:23 -- nvmf/common.sh@95 -- # (( 2 == 0 
)) 00:15:14.162 22:01:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@104 -- # continue 2 00:15:14.162 22:01:23 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:14.162 22:01:23 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:14.162 22:01:23 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@104 -- # continue 2 00:15:14.162 22:01:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:14.162 22:01:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:14.162 22:01:23 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:14.162 22:01:23 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:14.162 22:01:23 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:14.162 22:01:23 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:14.162 192.168.100.9' 00:15:14.162 22:01:23 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:14.162 192.168.100.9' 00:15:14.162 22:01:23 -- nvmf/common.sh@445 -- # head -n 1 00:15:14.162 22:01:23 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:14.162 22:01:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:14.162 192.168.100.9' 00:15:14.162 22:01:23 -- nvmf/common.sh@446 -- # tail -n +2 00:15:14.162 22:01:23 -- nvmf/common.sh@446 -- # head -n 1 00:15:14.162 22:01:23 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:14.162 22:01:23 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:14.162 22:01:23 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:14.162 22:01:23 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:14.162 22:01:23 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:14.162 22:01:23 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:14.162 22:01:23 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:14.162 22:01:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:14.162 22:01:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:14.162 22:01:23 -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 22:01:23 -- nvmf/common.sh@469 -- # nvmfpid=2128346 00:15:14.162 22:01:23 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:14.162 22:01:23 -- nvmf/common.sh@470 -- # waitforlisten 2128346 00:15:14.162 22:01:23 -- common/autotest_common.sh@819 -- # '[' -z 2128346 ']' 00:15:14.162 22:01:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.162 
22:01:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:14.163 22:01:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.163 22:01:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:14.163 22:01:23 -- common/autotest_common.sh@10 -- # set +x 00:15:14.163 [2024-07-26 22:01:23.926949] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:14.163 [2024-07-26 22:01:23.927001] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.163 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.163 [2024-07-26 22:01:24.012013] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.163 [2024-07-26 22:01:24.050059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:14.163 [2024-07-26 22:01:24.050189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.163 [2024-07-26 22:01:24.050199] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.163 [2024-07-26 22:01:24.050208] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.163 [2024-07-26 22:01:24.050262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.163 [2024-07-26 22:01:24.050358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.163 [2024-07-26 22:01:24.050441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.163 [2024-07-26 22:01:24.050442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.163 22:01:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:14.163 22:01:24 -- common/autotest_common.sh@852 -- # return 0 00:15:14.163 22:01:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:14.163 22:01:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:14.163 22:01:24 -- common/autotest_common.sh@10 -- # set +x 00:15:14.163 22:01:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.163 22:01:24 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:14.163 22:01:24 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16585 00:15:14.163 [2024-07-26 22:01:24.915375] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:14.163 22:01:24 -- target/invalid.sh@40 -- # out='request: 00:15:14.163 { 00:15:14.163 "nqn": "nqn.2016-06.io.spdk:cnode16585", 00:15:14.163 "tgt_name": "foobar", 00:15:14.163 "method": "nvmf_create_subsystem", 00:15:14.163 "req_id": 1 00:15:14.163 } 00:15:14.163 Got JSON-RPC error response 00:15:14.163 response: 00:15:14.163 { 00:15:14.163 "code": -32603, 00:15:14.163 "message": "Unable to find target foobar" 00:15:14.163 }' 00:15:14.163 22:01:24 -- target/invalid.sh@41 -- # [[ request: 00:15:14.163 { 00:15:14.163 "nqn": "nqn.2016-06.io.spdk:cnode16585", 00:15:14.163 "tgt_name": "foobar", 00:15:14.163 "method": "nvmf_create_subsystem", 
00:15:14.163 "req_id": 1 00:15:14.163 } 00:15:14.163 Got JSON-RPC error response 00:15:14.163 response: 00:15:14.163 { 00:15:14.163 "code": -32603, 00:15:14.163 "message": "Unable to find target foobar" 00:15:14.163 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:14.163 22:01:24 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:14.163 22:01:24 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8421 00:15:14.163 [2024-07-26 22:01:25.108091] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8421: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:14.163 22:01:25 -- target/invalid.sh@45 -- # out='request: 00:15:14.163 { 00:15:14.163 "nqn": "nqn.2016-06.io.spdk:cnode8421", 00:15:14.163 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:14.163 "method": "nvmf_create_subsystem", 00:15:14.163 "req_id": 1 00:15:14.163 } 00:15:14.163 Got JSON-RPC error response 00:15:14.163 response: 00:15:14.163 { 00:15:14.163 "code": -32602, 00:15:14.163 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:14.163 }' 00:15:14.163 22:01:25 -- target/invalid.sh@46 -- # [[ request: 00:15:14.163 { 00:15:14.163 "nqn": "nqn.2016-06.io.spdk:cnode8421", 00:15:14.163 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:14.163 "method": "nvmf_create_subsystem", 00:15:14.163 "req_id": 1 00:15:14.163 } 00:15:14.163 Got JSON-RPC error response 00:15:14.163 response: 00:15:14.163 { 00:15:14.163 "code": -32602, 00:15:14.163 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:14.163 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:14.163 22:01:25 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:14.163 22:01:25 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7006 00:15:14.163 [2024-07-26 22:01:25.300686] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7006: invalid model number 'SPDK_Controller' 00:15:14.163 22:01:25 -- target/invalid.sh@50 -- # out='request: 00:15:14.163 { 00:15:14.163 "nqn": "nqn.2016-06.io.spdk:cnode7006", 00:15:14.163 "model_number": "SPDK_Controller\u001f", 00:15:14.163 "method": "nvmf_create_subsystem", 00:15:14.163 "req_id": 1 00:15:14.163 } 00:15:14.163 Got JSON-RPC error response 00:15:14.163 response: 00:15:14.163 { 00:15:14.163 "code": -32602, 00:15:14.163 "message": "Invalid MN SPDK_Controller\u001f" 00:15:14.163 }' 00:15:14.163 22:01:25 -- target/invalid.sh@51 -- # [[ request: 00:15:14.163 { 00:15:14.163 "nqn": "nqn.2016-06.io.spdk:cnode7006", 00:15:14.163 "model_number": "SPDK_Controller\u001f", 00:15:14.163 "method": "nvmf_create_subsystem", 00:15:14.163 "req_id": 1 00:15:14.163 } 00:15:14.163 Got JSON-RPC error response 00:15:14.163 response: 00:15:14.163 { 00:15:14.163 "code": -32602, 00:15:14.163 "message": "Invalid MN SPDK_Controller\u001f" 00:15:14.163 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:14.163 22:01:25 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:14.163 22:01:25 -- target/invalid.sh@19 -- # local length=21 ll 00:15:14.163 22:01:25 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' 
'94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:14.163 22:01:25 -- target/invalid.sh@21 -- # local chars 00:15:14.163 22:01:25 -- target/invalid.sh@22 -- # local string 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # printf %x 68 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # string+=D 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # printf %x 115 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # string+=s 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # printf %x 51 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # string+=3 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # printf %x 41 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # string+=')' 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # printf %x 41 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # string+=')' 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # printf %x 60 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:14.163 22:01:25 -- target/invalid.sh@25 -- # string+='<' 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.163 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.463 22:01:25 -- target/invalid.sh@25 -- # printf %x 84 00:15:14.463 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:14.463 22:01:25 -- target/invalid.sh@25 -- # string+=T 00:15:14.463 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.463 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.463 22:01:25 -- target/invalid.sh@25 -- # printf %x 97 00:15:14.463 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:14.463 22:01:25 -- target/invalid.sh@25 -- # string+=a 00:15:14.463 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.463 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.463 22:01:25 -- target/invalid.sh@25 -- # printf %x 51 00:15:14.463 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=3 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 78 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=N 
00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 78 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=N 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 125 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+='}' 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 72 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x48' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=H 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 104 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=h 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 106 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=j 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 54 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=6 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 89 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=Y 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 40 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+='(' 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 68 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=D 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 120 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=x 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # printf %x 100 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:14.464 22:01:25 -- target/invalid.sh@25 -- # string+=d 
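The character-by-character trace running through this stretch is invalid.sh's gen_random_s helper: RANDOM is pinned to 0 at the top of the script, each iteration picks a decimal code from the chars array (ASCII 32-127), renders it with printf %x, and appends the byte with echo -e '\xNN'. A condensed sketch of the same idea, assuming a modulo draw in place of the pre-built chars array, would be:

    gen_random_s() {
        local length=$1 ll code string=
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 96 + 32 ))                  # ASCII 32..127, same span as the chars array
            string+=$(echo -e "\x$(printf %x "$code")")   # hex-escape the code and append the byte
        done
        echo "$string"
    }
    gen_random_s 21        # 21 characters, as requested by target/invalid.sh@54 above

The 21-character result is longer than the 20-byte NVMe serial-number field, which is the point of the negative nvmf_create_subsystem check this string feeds.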
00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.464 22:01:25 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.464 22:01:25 -- target/invalid.sh@28 -- # [[ D == \- ]] 00:15:14.464 22:01:25 -- target/invalid.sh@31 -- # echo 'Ds3)) /dev/null' 00:15:17.343 22:01:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.343 22:01:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:17.343 22:01:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:17.343 22:01:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:17.343 22:01:28 -- common/autotest_common.sh@10 -- # set +x 00:15:25.468 22:01:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:25.468 22:01:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:25.468 22:01:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:25.468 22:01:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:25.468 22:01:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:25.468 22:01:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:25.468 22:01:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:25.468 22:01:36 -- nvmf/common.sh@294 -- # net_devs=() 00:15:25.468 22:01:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:25.468 22:01:36 -- nvmf/common.sh@295 -- # e810=() 00:15:25.468 22:01:36 -- nvmf/common.sh@295 -- # local -ga e810 00:15:25.468 22:01:36 -- nvmf/common.sh@296 -- # x722=() 00:15:25.468 22:01:36 -- nvmf/common.sh@296 -- # local -ga x722 00:15:25.468 22:01:36 -- nvmf/common.sh@297 -- # mlx=() 00:15:25.468 22:01:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:25.468 22:01:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.468 22:01:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.468 22:01:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.468 22:01:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.468 22:01:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.468 22:01:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.468 22:01:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.468 22:01:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.468 22:01:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.469 22:01:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.469 22:01:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.469 22:01:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:25.469 22:01:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:25.469 22:01:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:25.469 22:01:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:25.469 22:01:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:25.469 22:01:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:25.469 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:25.469 22:01:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 
]] 00:15:25.469 22:01:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:25.469 22:01:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:25.469 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:25.469 22:01:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:25.469 22:01:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:25.469 22:01:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.469 22:01:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:25.469 22:01:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.469 22:01:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:25.469 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:25.469 22:01:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.469 22:01:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.469 22:01:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:25.469 22:01:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.469 22:01:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:25.469 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:25.469 22:01:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.469 22:01:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:25.469 22:01:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:25.469 22:01:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:25.469 22:01:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:25.469 22:01:36 -- nvmf/common.sh@57 -- # uname 00:15:25.469 22:01:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:25.469 22:01:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:25.469 22:01:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:25.469 22:01:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:25.469 22:01:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:25.469 22:01:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:25.469 22:01:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:25.469 22:01:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:25.469 22:01:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:25.469 22:01:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:25.469 22:01:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:25.469 22:01:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:25.469 22:01:36 -- 
nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:25.469 22:01:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:25.469 22:01:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:25.469 22:01:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:25.469 22:01:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:25.469 22:01:36 -- nvmf/common.sh@104 -- # continue 2 00:15:25.469 22:01:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:25.469 22:01:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:25.469 22:01:36 -- nvmf/common.sh@104 -- # continue 2 00:15:25.469 22:01:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:25.469 22:01:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:25.469 22:01:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:25.469 22:01:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:25.469 22:01:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:25.469 22:01:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:25.469 22:01:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:25.469 22:01:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:25.469 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:25.469 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:25.469 altname enp217s0f0np0 00:15:25.469 altname ens818f0np0 00:15:25.469 inet 192.168.100.8/24 scope global mlx_0_0 00:15:25.469 valid_lft forever preferred_lft forever 00:15:25.469 22:01:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:25.469 22:01:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:25.469 22:01:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:25.469 22:01:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:25.469 22:01:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:25.469 22:01:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:25.469 22:01:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:25.469 22:01:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:25.469 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:25.469 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:25.469 altname enp217s0f1np1 00:15:25.469 altname ens818f1np1 00:15:25.469 inet 192.168.100.9/24 scope global mlx_0_1 00:15:25.469 valid_lft forever preferred_lft forever 00:15:25.469 22:01:36 -- nvmf/common.sh@410 -- # return 0 00:15:25.469 22:01:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.469 22:01:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:25.469 22:01:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:25.469 22:01:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:25.469 22:01:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:25.469 22:01:36 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:25.469 22:01:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:25.469 22:01:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:25.469 22:01:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:25.729 22:01:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:25.729 22:01:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:25.729 22:01:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:25.729 22:01:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:25.729 22:01:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:25.729 22:01:36 -- nvmf/common.sh@104 -- # continue 2 00:15:25.729 22:01:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:25.729 22:01:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:25.729 22:01:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:25.729 22:01:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:25.729 22:01:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:25.729 22:01:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:25.729 22:01:36 -- nvmf/common.sh@104 -- # continue 2 00:15:25.729 22:01:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:25.729 22:01:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:25.729 22:01:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:25.729 22:01:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:25.729 22:01:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:25.729 22:01:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:25.729 22:01:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:25.729 22:01:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:25.729 22:01:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:25.729 22:01:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:25.729 22:01:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:25.729 22:01:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:25.729 22:01:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:25.729 192.168.100.9' 00:15:25.729 22:01:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:25.729 192.168.100.9' 00:15:25.729 22:01:36 -- nvmf/common.sh@445 -- # head -n 1 00:15:25.729 22:01:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:25.729 22:01:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:25.729 192.168.100.9' 00:15:25.729 22:01:36 -- nvmf/common.sh@446 -- # tail -n +2 00:15:25.729 22:01:36 -- nvmf/common.sh@446 -- # head -n 1 00:15:25.729 22:01:36 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:25.729 22:01:36 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:25.729 22:01:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:25.729 22:01:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:25.729 22:01:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:25.729 22:01:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:25.729 22:01:36 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:25.729 22:01:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:25.729 22:01:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:25.729 22:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:25.729 22:01:36 -- nvmf/common.sh@469 -- # nvmfpid=2133267 00:15:25.729 
22:01:36 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:25.729 22:01:36 -- nvmf/common.sh@470 -- # waitforlisten 2133267 00:15:25.729 22:01:36 -- common/autotest_common.sh@819 -- # '[' -z 2133267 ']' 00:15:25.729 22:01:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.729 22:01:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.729 22:01:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.729 22:01:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.729 22:01:36 -- common/autotest_common.sh@10 -- # set +x 00:15:25.729 [2024-07-26 22:01:36.820014] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:25.729 [2024-07-26 22:01:36.820068] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.729 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.729 [2024-07-26 22:01:36.905176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.729 [2024-07-26 22:01:36.941058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:25.729 [2024-07-26 22:01:36.941189] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.729 [2024-07-26 22:01:36.941199] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.729 [2024-07-26 22:01:36.941208] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.729 [2024-07-26 22:01:36.941331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.729 [2024-07-26 22:01:36.941357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.729 [2024-07-26 22:01:36.941359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.665 22:01:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:26.665 22:01:37 -- common/autotest_common.sh@852 -- # return 0 00:15:26.665 22:01:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:26.665 22:01:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:26.665 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:26.665 22:01:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.665 22:01:37 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:26.665 22:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.665 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:26.665 [2024-07-26 22:01:37.694071] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15eacd0/0x15ef1c0) succeed. 00:15:26.665 [2024-07-26 22:01:37.704068] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15ec220/0x1630850) succeed. 
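By this point abort.sh has launched nvmf_tgt on core mask 0xE and created the RDMA transport; the two create_ib_device NOTICEs above confirm one IB device per mlx5 port. The rpc_cmd calls that follow stand up the target the abort example then drives. Issued by hand against a running target, the equivalent sequence would look roughly like this (default rpc.py socket assumed; addresses and arguments are the ones visible in the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MB malloc bdev with 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # inject latency (microseconds)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

The slow Delay0 namespace is what lets the abort example keep 128 requests queued long enough to have something to cancel, which is what the 'abort submitted 51166' summary further down reports.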
00:15:26.665 22:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.665 22:01:37 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:26.665 22:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.665 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:26.665 Malloc0 00:15:26.665 22:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.665 22:01:37 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:26.665 22:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.665 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:26.665 Delay0 00:15:26.665 22:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.665 22:01:37 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:26.665 22:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.665 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:26.665 22:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.665 22:01:37 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:26.665 22:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.665 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:26.665 22:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.665 22:01:37 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:26.665 22:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.665 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:26.665 [2024-07-26 22:01:37.865898] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:26.665 22:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.665 22:01:37 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:26.665 22:01:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:26.665 22:01:37 -- common/autotest_common.sh@10 -- # set +x 00:15:26.665 22:01:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:26.665 22:01:37 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:26.924 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.924 [2024-07-26 22:01:37.958873] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:29.461 Initializing NVMe Controllers 00:15:29.461 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:29.461 controller IO queue size 128 less than required 00:15:29.461 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:15:29.461 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:15:29.461 Initialization complete. Launching workers. 
00:15:29.461 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51105 00:15:29.461 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51166, failed to submit 62 00:15:29.461 success 51105, unsuccess 61, failed 0 00:15:29.461 22:01:40 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:29.461 22:01:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:29.461 22:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:29.461 22:01:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:29.461 22:01:40 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:29.461 22:01:40 -- target/abort.sh@38 -- # nvmftestfini 00:15:29.461 22:01:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:29.461 22:01:40 -- nvmf/common.sh@116 -- # sync 00:15:29.461 22:01:40 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:29.461 22:01:40 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:29.461 22:01:40 -- nvmf/common.sh@119 -- # set +e 00:15:29.461 22:01:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:29.461 22:01:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:29.461 rmmod nvme_rdma 00:15:29.461 rmmod nvme_fabrics 00:15:29.461 22:01:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:29.461 22:01:40 -- nvmf/common.sh@123 -- # set -e 00:15:29.461 22:01:40 -- nvmf/common.sh@124 -- # return 0 00:15:29.461 22:01:40 -- nvmf/common.sh@477 -- # '[' -n 2133267 ']' 00:15:29.461 22:01:40 -- nvmf/common.sh@478 -- # killprocess 2133267 00:15:29.461 22:01:40 -- common/autotest_common.sh@926 -- # '[' -z 2133267 ']' 00:15:29.461 22:01:40 -- common/autotest_common.sh@930 -- # kill -0 2133267 00:15:29.461 22:01:40 -- common/autotest_common.sh@931 -- # uname 00:15:29.461 22:01:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:29.461 22:01:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2133267 00:15:29.461 22:01:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:29.461 22:01:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:29.461 22:01:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2133267' 00:15:29.461 killing process with pid 2133267 00:15:29.461 22:01:40 -- common/autotest_common.sh@945 -- # kill 2133267 00:15:29.461 22:01:40 -- common/autotest_common.sh@950 -- # wait 2133267 00:15:29.462 22:01:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:29.462 22:01:40 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:29.462 00:15:29.462 real 0m12.142s 00:15:29.462 user 0m14.869s 00:15:29.462 sys 0m6.883s 00:15:29.462 22:01:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.462 22:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:29.462 ************************************ 00:15:29.462 END TEST nvmf_abort 00:15:29.462 ************************************ 00:15:29.462 22:01:40 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:29.462 22:01:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:29.462 22:01:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:29.462 22:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:29.462 ************************************ 00:15:29.462 START TEST nvmf_ns_hotplug_stress 00:15:29.462 ************************************ 00:15:29.462 22:01:40 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:15:29.462 * Looking for test storage... 00:15:29.462 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:29.462 22:01:40 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.462 22:01:40 -- nvmf/common.sh@7 -- # uname -s 00:15:29.462 22:01:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.462 22:01:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.462 22:01:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.462 22:01:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.462 22:01:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.462 22:01:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.462 22:01:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.462 22:01:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.462 22:01:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.462 22:01:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.462 22:01:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:29.462 22:01:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:29.462 22:01:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.462 22:01:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.462 22:01:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.462 22:01:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:29.462 22:01:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.462 22:01:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.462 22:01:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.462 22:01:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.462 22:01:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.462 22:01:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.462 22:01:40 -- paths/export.sh@5 -- # export PATH 00:15:29.462 22:01:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.462 22:01:40 -- nvmf/common.sh@46 -- # : 0 00:15:29.462 22:01:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:29.462 22:01:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:29.462 22:01:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:29.462 22:01:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.462 22:01:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.462 22:01:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:29.462 22:01:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:29.462 22:01:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:29.462 22:01:40 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:29.462 22:01:40 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:15:29.462 22:01:40 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:29.462 22:01:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.462 22:01:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:29.462 22:01:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:29.462 22:01:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:29.462 22:01:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.462 22:01:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.462 22:01:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.462 22:01:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:29.462 22:01:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:29.462 22:01:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:29.462 22:01:40 -- common/autotest_common.sh@10 -- # set +x 00:15:37.588 22:01:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:37.588 22:01:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:37.588 22:01:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:37.588 22:01:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:37.588 22:01:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:37.588 22:01:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:37.588 22:01:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:37.588 22:01:48 -- nvmf/common.sh@294 -- # net_devs=() 00:15:37.588 22:01:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:37.588 22:01:48 -- nvmf/common.sh@295 -- 
# e810=() 00:15:37.588 22:01:48 -- nvmf/common.sh@295 -- # local -ga e810 00:15:37.588 22:01:48 -- nvmf/common.sh@296 -- # x722=() 00:15:37.588 22:01:48 -- nvmf/common.sh@296 -- # local -ga x722 00:15:37.588 22:01:48 -- nvmf/common.sh@297 -- # mlx=() 00:15:37.588 22:01:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:37.588 22:01:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.588 22:01:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:37.588 22:01:48 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:37.588 22:01:48 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:37.588 22:01:48 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:37.588 22:01:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:37.588 22:01:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:37.588 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:37.588 22:01:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:37.588 22:01:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:37.588 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:37.588 22:01:48 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:37.588 22:01:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:37.588 22:01:48 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.588 22:01:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
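From here nvmf/common.sh is only working out which physical RDMA NICs this runner has: it keeps the Mellanox PCI functions it recognizes (here two ports with vendor 0x15b3, device 0x1015 - ConnectX-4 Lx class parts at 0000:d9:00.0 and 0000:d9:00.1) and maps each one to its kernel net device through sysfs. Stripped of the xtrace noise, the discovery amounts to roughly the following sketch of the traced loop, with the two PCI addresses hard-coded purely for illustration:

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        # every netdev registered for this PCI function appears under its sysfs node
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done

On this machine that yields mlx_0_0 and mlx_0_1, which the rest of the run addresses as 192.168.100.8 and 192.168.100.9.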
00:15:37.588 22:01:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.588 22:01:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:37.588 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:37.588 22:01:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.588 22:01:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.588 22:01:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:37.588 22:01:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.588 22:01:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:37.588 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:37.588 22:01:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.588 22:01:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:37.588 22:01:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:37.588 22:01:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:37.588 22:01:48 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:37.588 22:01:48 -- nvmf/common.sh@57 -- # uname 00:15:37.588 22:01:48 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:37.588 22:01:48 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:37.588 22:01:48 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:37.588 22:01:48 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:37.588 22:01:48 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:37.588 22:01:48 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:37.588 22:01:48 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:37.588 22:01:48 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:37.588 22:01:48 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:37.588 22:01:48 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:37.588 22:01:48 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:37.588 22:01:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:37.588 22:01:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:37.588 22:01:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:37.588 22:01:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:37.588 22:01:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:37.588 22:01:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:37.588 22:01:48 -- nvmf/common.sh@104 -- # continue 2 00:15:37.588 22:01:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.588 22:01:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:37.588 22:01:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:37.588 22:01:48 -- nvmf/common.sh@104 -- # continue 2 00:15:37.588 22:01:48 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:37.849 22:01:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:37.849 22:01:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:37.849 22:01:48 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:37.849 22:01:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:37.849 22:01:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:37.849 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:37.849 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:37.849 altname enp217s0f0np0 00:15:37.849 altname ens818f0np0 00:15:37.849 inet 192.168.100.8/24 scope global mlx_0_0 00:15:37.849 valid_lft forever preferred_lft forever 00:15:37.849 22:01:48 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:37.849 22:01:48 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:37.849 22:01:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:37.849 22:01:48 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:37.849 22:01:48 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:37.849 22:01:48 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:37.849 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:37.849 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:37.849 altname enp217s0f1np1 00:15:37.849 altname ens818f1np1 00:15:37.849 inet 192.168.100.9/24 scope global mlx_0_1 00:15:37.849 valid_lft forever preferred_lft forever 00:15:37.849 22:01:48 -- nvmf/common.sh@410 -- # return 0 00:15:37.849 22:01:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:37.849 22:01:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:37.849 22:01:48 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:37.849 22:01:48 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:37.849 22:01:48 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:37.849 22:01:48 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:37.849 22:01:48 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:37.849 22:01:48 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:37.849 22:01:48 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:37.849 22:01:48 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:37.849 22:01:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:37.849 22:01:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.849 22:01:48 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:37.849 22:01:48 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:37.849 22:01:48 -- nvmf/common.sh@104 -- # continue 2 00:15:37.849 22:01:48 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:37.849 22:01:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.849 22:01:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:37.849 22:01:48 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.849 22:01:48 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:37.849 22:01:48 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:37.849 22:01:48 -- 
nvmf/common.sh@104 -- # continue 2 00:15:37.849 22:01:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:37.849 22:01:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:37.849 22:01:48 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:37.849 22:01:48 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:37.849 22:01:48 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:37.849 22:01:48 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:37.849 22:01:48 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:37.849 22:01:48 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:37.849 192.168.100.9' 00:15:37.849 22:01:48 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:37.849 192.168.100.9' 00:15:37.849 22:01:48 -- nvmf/common.sh@445 -- # head -n 1 00:15:37.849 22:01:48 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:37.849 22:01:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:37.849 192.168.100.9' 00:15:37.849 22:01:48 -- nvmf/common.sh@446 -- # tail -n +2 00:15:37.849 22:01:48 -- nvmf/common.sh@446 -- # head -n 1 00:15:37.849 22:01:48 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:37.849 22:01:48 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:37.849 22:01:48 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:37.849 22:01:48 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:37.849 22:01:48 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:37.849 22:01:48 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:37.849 22:01:48 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:15:37.849 22:01:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:37.849 22:01:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:37.849 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:15:37.849 22:01:48 -- nvmf/common.sh@469 -- # nvmfpid=2138013 00:15:37.849 22:01:48 -- nvmf/common.sh@470 -- # waitforlisten 2138013 00:15:37.849 22:01:48 -- common/autotest_common.sh@819 -- # '[' -z 2138013 ']' 00:15:37.849 22:01:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.849 22:01:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:37.849 22:01:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.849 22:01:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:37.849 22:01:48 -- common/autotest_common.sh@10 -- # set +x 00:15:37.849 22:01:48 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:37.849 [2024-07-26 22:01:49.003708] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
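nvmfappstart at the end of the block above launches the target application with core mask 0xE and all tracepoint groups enabled, then waits for its RPC socket before issuing any configuration. A minimal stand-alone equivalent of that start-and-wait step might look like the sketch below; the polling loop is only a simplification of the harness's waitforlisten helper, and /var/tmp/spdk.sock is assumed to be the default RPC socket path:

    spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # keep probing the RPC socket until the app is ready to accept configuration calls
    until spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done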
00:15:37.849 [2024-07-26 22:01:49.003757] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.849 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.108 [2024-07-26 22:01:49.090014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:38.108 [2024-07-26 22:01:49.127503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:38.108 [2024-07-26 22:01:49.127629] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.108 [2024-07-26 22:01:49.127639] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.108 [2024-07-26 22:01:49.127648] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.108 [2024-07-26 22:01:49.127687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.108 [2024-07-26 22:01:49.127778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.108 [2024-07-26 22:01:49.127779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.675 22:01:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:38.675 22:01:49 -- common/autotest_common.sh@852 -- # return 0 00:15:38.675 22:01:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:38.675 22:01:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:38.675 22:01:49 -- common/autotest_common.sh@10 -- # set +x 00:15:38.675 22:01:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.675 22:01:49 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:15:38.675 22:01:49 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:38.933 [2024-07-26 22:01:50.033815] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x214ecd0/0x21531c0) succeed. 00:15:38.933 [2024-07-26 22:01:50.044127] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2150220/0x2194850) succeed. 
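With the RDMA transport created and both mlx5 ports registered as IB devices, the entries that follow assemble the actual stress-test target: a subsystem that allows up to ten namespaces, one deliberately slow namespace (Delay0 on top of Malloc0), one trivially fast one (the NULL1 null bdev), and a 30-second spdk_nvme_perf run from the same host to keep I/O flowing while namespaces are yanked. Condensed from the trace below (same commands and arguments; the background perf launch is shown with a plain '&'):

    spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
    spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # becomes NSID 1
    spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
    spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1      # becomes NSID 2
    spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &    # its PID becomes PERF_PID for the hotplug loop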
00:15:39.191 22:01:50 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:39.191 22:01:50 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:39.450 [2024-07-26 22:01:50.495388] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:39.450 22:01:50 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:39.708 22:01:50 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:15:39.708 Malloc0 00:15:39.708 22:01:50 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:39.968 Delay0 00:15:39.968 22:01:51 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:40.226 22:01:51 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:15:40.227 NULL1 00:15:40.227 22:01:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:40.485 22:01:51 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2138433 00:15:40.485 22:01:51 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:15:40.485 22:01:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:40.485 22:01:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:40.485 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.479 Read completed with error (sct=0, sc=11) 00:15:41.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.737 22:01:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:41.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:41.737 22:01:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:15:41.737 22:01:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:15:41.996 true 00:15:41.996 22:01:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:41.996 22:01:53 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.942 22:01:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.942 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:42.942 22:01:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:15:42.942 22:01:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:15:43.201 true 00:15:43.201 22:01:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:43.201 22:01:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.135 22:01:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:44.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:44.135 22:01:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:15:44.135 22:01:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:15:44.394 true 00:15:44.394 22:01:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:44.394 22:01:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:45.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.331 22:01:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:45.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:45.331 22:01:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:15:45.331 22:01:56 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:15:45.589 true 00:15:45.589 22:01:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:45.589 22:01:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:46.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.527 22:01:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:46.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:46.527 22:01:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:15:46.527 22:01:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:15:46.786 true 00:15:46.786 22:01:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:46.786 22:01:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:47.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.722 22:01:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.722 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:47.722 22:01:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:15:47.722 22:01:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:15:47.980 true 00:15:47.980 22:01:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:47.980 22:01:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.916 22:01:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:48.916 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:15:48.916 22:02:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:15:48.916 22:02:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:15:49.174 true 00:15:49.174 22:02:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:49.174 22:02:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:50.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.109 22:02:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:50.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:50.109 22:02:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:15:50.109 22:02:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:15:50.368 true 00:15:50.368 22:02:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:50.368 22:02:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.303 22:02:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:51.303 22:02:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:15:51.303 22:02:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:15:51.561 true 00:15:51.561 22:02:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:51.561 22:02:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.496 22:02:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
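The pattern that repeats from here until the timed perf run exits is the core of the test: while the initiator is still issuing reads, namespace 1 is hot-removed and immediately re-added, and the null bdev behind namespace 2 is resized upward by one step on every pass. The bursts of "Read completed with error (sct=0, sc=11)" are the host-side perf process seeing reads fail during the window in which the namespace is detached; the point of the stress test is that both target and initiator survive this. The loop being traced is essentially:

    null_size=1000
    while kill -0 "$PERF_PID"; do                     # keep going while spdk_nvme_perf is still running
        spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$(( null_size + 1 ))
        spdk/scripts/rpc.py bdev_null_resize NULL1 "$null_size"
    done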
00:15:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.496 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:52.496 22:02:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:15:52.496 22:02:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:15:52.755 true 00:15:52.755 22:02:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:52.755 22:02:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:53.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.692 22:02:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:53.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.692 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:53.692 22:02:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:15:53.692 22:02:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:15:53.951 true 00:15:53.951 22:02:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:53.951 22:02:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:54.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:54.889 22:02:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:54.889 22:02:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:15:54.889 22:02:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:15:55.148 true 00:15:55.148 22:02:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:55.148 22:02:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:55.408 22:02:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:55.408 22:02:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:15:55.408 22:02:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:15:55.667 true 00:15:55.667 22:02:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:55.667 22:02:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.045 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.045 22:02:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.045 22:02:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:15:57.045 22:02:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:15:57.045 true 00:15:57.045 22:02:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:57.045 22:02:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:57.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.986 22:02:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:57.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:57.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:58.245 22:02:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:15:58.245 22:02:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:15:58.245 true 00:15:58.245 22:02:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:58.245 22:02:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.209 22:02:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:59.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:15:59.468 22:02:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:15:59.468 22:02:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1016 00:15:59.468 true 00:15:59.468 22:02:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:15:59.468 22:02:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:00.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.405 22:02:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:00.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.405 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:00.664 22:02:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:00.664 22:02:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:00.664 true 00:16:00.664 22:02:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:00.664 22:02:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.601 22:02:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:01.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:01.601 22:02:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:01.601 22:02:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:01.861 true 00:16:01.861 22:02:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:01.861 22:02:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.806 22:02:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:02.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:02.806 22:02:14 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:02.806 22:02:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:03.065 true 00:16:03.065 22:02:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:03.065 22:02:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:04.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.004 22:02:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:04.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:04.263 22:02:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:04.263 22:02:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:04.263 true 00:16:04.263 22:02:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:04.263 22:02:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:05.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.202 22:02:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:05.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.461 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:05.461 22:02:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:05.461 22:02:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:05.461 true 00:16:05.461 22:02:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:05.461 22:02:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.399 22:02:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.399 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:16:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:06.658 22:02:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:06.658 22:02:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:06.658 true 00:16:06.658 22:02:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:06.659 22:02:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.596 22:02:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:07.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:07.854 22:02:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:07.854 22:02:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:16:07.854 true 00:16:07.854 22:02:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:07.854 22:02:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.791 22:02:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:08.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.048 22:02:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:09.048 22:02:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:09.048 true 00:16:09.048 22:02:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:09.048 22:02:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:09.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:09.983 22:02:21 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:09.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:10.242 22:02:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:10.242 22:02:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:10.500 true 00:16:10.500 22:02:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:10.500 22:02:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.436 22:02:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:11.436 22:02:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:11.436 22:02:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:11.695 true 00:16:11.695 22:02:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:11.695 22:02:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.695 22:02:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:11.953 22:02:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:11.953 22:02:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:12.213 true 00:16:12.213 22:02:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:12.213 22:02:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.472 22:02:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.472 22:02:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:12.472 22:02:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:12.732 true 00:16:12.732 22:02:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:12.732 22:02:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:12.991 22:02:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.991 22:02:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:12.991 22:02:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 
00:16:13.251 true 00:16:13.251 22:02:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:13.251 22:02:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.251 Initializing NVMe Controllers 00:16:13.251 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:13.251 Controller IO queue size 128, less than required. 00:16:13.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:13.251 Controller IO queue size 128, less than required. 00:16:13.251 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:13.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:13.251 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:13.251 Initialization complete. Launching workers. 00:16:13.251 ======================================================== 00:16:13.251 Latency(us) 00:16:13.251 Device Information : IOPS MiB/s Average min max 00:16:13.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5570.53 2.72 20460.09 884.89 1133154.65 00:16:13.251 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 34862.57 17.02 3671.42 1830.80 281674.04 00:16:13.251 ======================================================== 00:16:13.251 Total : 40433.10 19.74 5984.42 884.89 1133154.65 00:16:13.251 00:16:13.251 22:02:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:13.510 22:02:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:16:13.510 22:02:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:16:13.769 true 00:16:13.769 22:02:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2138433 00:16:13.769 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2138433) - No such process 00:16:13.769 22:02:24 -- target/ns_hotplug_stress.sh@53 -- # wait 2138433 00:16:13.769 22:02:24 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.769 22:02:24 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:14.027 22:02:25 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:14.028 22:02:25 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:14.028 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:14.028 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:14.028 22:02:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:14.287 null0 00:16:14.287 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:14.287 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:14.287 22:02:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:14.287 null1 00:16:14.546 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:16:14.546 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:14.546 22:02:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:14.546 null2 00:16:14.546 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:14.546 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:14.546 22:02:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:14.805 null3 00:16:14.805 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:14.805 22:02:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:14.805 22:02:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:15.103 null4 00:16:15.103 22:02:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:15.103 22:02:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:15.103 22:02:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:15.103 null5 00:16:15.103 22:02:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:15.103 22:02:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:15.103 22:02:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:15.390 null6 00:16:15.390 22:02:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:15.391 null7 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
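The null0 through null7 bdevs created above (100 MiB each, 4096-byte block size) back the eight add/remove workers launched in the next phase. In sketch form, that setup step amounts to the following loop, reusing the same rpc.py path shown in the trace:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nthreads=8
for ((i = 0; i < nthreads; i++)); do
    "$rpc" bdev_null_create "null$i" 100 4096    # one 100 MiB / 4 KiB-block null bdev per worker
done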
00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:15.391 22:02:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@66 -- # wait 2144600 2144601 2144603 2144605 2144608 2144609 2144612 2144614 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:15.651 22:02:26 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:15.910 22:02:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:15.910 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:15.910 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:15.910 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.169 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.170 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:16.429 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.429 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:16.429 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:16.429 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:16.429 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:16.429 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:16.429 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:16.429 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.688 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.689 22:02:27 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:16.689 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:16.949 22:02:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:16.949 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.208 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.468 22:02:28 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:17.468 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.728 22:02:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:17.988 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:17.988 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:17.988 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.988 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:17.988 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:17.988 22:02:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.988 22:02:29 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:17.988 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:18.247 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:18.247 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.247 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:18.247 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:18.247 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:18.247 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:18.247 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:18.247 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
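The interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns calls above come from eight background workers, each pairing a fixed namespace ID with its own null bdev (nsid 1 with null0, nsid 2 with null1, and so on) for ten passes, while the parent shell collects the worker pids and waits for all of them. A minimal sketch of that dispatch pattern, mirroring the trace rather than quoting the script, with illustrative variable names:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
nthreads=8

add_remove() {                                   # one worker: attach/detach its bdev ten times
    local nsid=$1 bdev=$2
    for ((pass = 0; pass < 10; pass++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"
    done
}

pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &           # run each worker in the background
    pids+=($!)                                   # remember its pid
done
wait "${pids[@]}"                                # block until all eight workers finish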
00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:18.507 22:02:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:18.767 22:02:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:19.027 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:19.286 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:19.287 22:02:30 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:19.287 22:02:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:19.287 22:02:30 -- nvmf/common.sh@116 -- # sync 00:16:19.287 22:02:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:19.287 22:02:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:19.287 22:02:30 -- nvmf/common.sh@119 -- # set +e 00:16:19.287 22:02:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:19.287 22:02:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:19.287 rmmod nvme_rdma 00:16:19.287 rmmod nvme_fabrics 00:16:19.287 22:02:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:19.287 22:02:30 -- nvmf/common.sh@123 -- # set -e 00:16:19.287 22:02:30 -- nvmf/common.sh@124 -- # return 0 00:16:19.287 22:02:30 -- nvmf/common.sh@477 -- # '[' -n 2138013 ']' 00:16:19.287 22:02:30 -- nvmf/common.sh@478 -- # killprocess 2138013 00:16:19.287 22:02:30 -- common/autotest_common.sh@926 -- # '[' -z 2138013 ']' 00:16:19.287 22:02:30 -- common/autotest_common.sh@930 -- # kill -0 2138013 00:16:19.287 22:02:30 -- common/autotest_common.sh@931 -- # uname 00:16:19.287 22:02:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:19.287 22:02:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2138013 
00:16:19.287 22:02:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:19.287 22:02:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:19.287 22:02:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2138013' 00:16:19.287 killing process with pid 2138013 00:16:19.287 22:02:30 -- common/autotest_common.sh@945 -- # kill 2138013 00:16:19.287 22:02:30 -- common/autotest_common.sh@950 -- # wait 2138013 00:16:19.546 22:02:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:19.546 22:02:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:19.546 00:16:19.546 real 0m50.136s 00:16:19.546 user 3m16.400s 00:16:19.546 sys 0m15.798s 00:16:19.546 22:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.546 22:02:30 -- common/autotest_common.sh@10 -- # set +x 00:16:19.546 ************************************ 00:16:19.546 END TEST nvmf_ns_hotplug_stress 00:16:19.546 ************************************ 00:16:19.546 22:02:30 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:19.546 22:02:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:19.546 22:02:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:19.546 22:02:30 -- common/autotest_common.sh@10 -- # set +x 00:16:19.546 ************************************ 00:16:19.546 START TEST nvmf_connect_stress 00:16:19.546 ************************************ 00:16:19.546 22:02:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:19.546 * Looking for test storage... 00:16:19.805 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:19.805 22:02:30 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.805 22:02:30 -- nvmf/common.sh@7 -- # uname -s 00:16:19.805 22:02:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.805 22:02:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.805 22:02:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.805 22:02:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.805 22:02:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.805 22:02:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.805 22:02:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.805 22:02:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.805 22:02:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.805 22:02:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.805 22:02:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:19.805 22:02:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:19.805 22:02:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.805 22:02:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.805 22:02:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.805 22:02:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:19.805 22:02:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.805 22:02:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.805 22:02:30 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.805 22:02:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.806 22:02:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.806 22:02:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.806 22:02:30 -- paths/export.sh@5 -- # export PATH 00:16:19.806 22:02:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.806 22:02:30 -- nvmf/common.sh@46 -- # : 0 00:16:19.806 22:02:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:19.806 22:02:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:19.806 22:02:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:19.806 22:02:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.806 22:02:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.806 22:02:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:19.806 22:02:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:19.806 22:02:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:19.806 22:02:30 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:19.806 22:02:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:19.806 22:02:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.806 22:02:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:19.806 22:02:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:19.806 22:02:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:19.806 22:02:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.806 22:02:30 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.806 22:02:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.806 22:02:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:19.806 22:02:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:19.806 22:02:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:19.806 22:02:30 -- common/autotest_common.sh@10 -- # set +x 00:16:27.923 22:02:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:27.923 22:02:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:27.923 22:02:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:27.923 22:02:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:27.923 22:02:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:27.923 22:02:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:27.923 22:02:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:27.923 22:02:38 -- nvmf/common.sh@294 -- # net_devs=() 00:16:27.923 22:02:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:27.923 22:02:38 -- nvmf/common.sh@295 -- # e810=() 00:16:27.923 22:02:38 -- nvmf/common.sh@295 -- # local -ga e810 00:16:27.923 22:02:38 -- nvmf/common.sh@296 -- # x722=() 00:16:27.923 22:02:38 -- nvmf/common.sh@296 -- # local -ga x722 00:16:27.923 22:02:38 -- nvmf/common.sh@297 -- # mlx=() 00:16:27.923 22:02:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:27.923 22:02:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:27.923 22:02:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:27.923 22:02:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:27.923 22:02:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:27.923 22:02:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:27.923 22:02:38 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:27.924 22:02:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:27.924 22:02:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:27.924 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:27.924 22:02:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@361 -- # 
NVME_CONNECT='nvme connect -i 15' 00:16:27.924 22:02:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:27.924 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:27.924 22:02:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:27.924 22:02:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:27.924 22:02:38 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.924 22:02:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:27.924 22:02:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.924 22:02:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:27.924 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:27.924 22:02:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.924 22:02:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:27.924 22:02:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:27.924 22:02:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:27.924 22:02:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:27.924 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:27.924 22:02:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:27.924 22:02:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:27.924 22:02:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:27.924 22:02:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:27.924 22:02:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:27.924 22:02:38 -- nvmf/common.sh@57 -- # uname 00:16:27.924 22:02:38 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:27.924 22:02:38 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:27.924 22:02:38 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:27.924 22:02:38 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:27.924 22:02:38 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:27.924 22:02:38 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:27.924 22:02:38 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:27.924 22:02:38 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:27.924 22:02:38 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:27.924 22:02:38 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:27.924 22:02:38 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:27.924 22:02:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:27.924 22:02:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:27.924 22:02:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:27.924 22:02:38 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:27.924 22:02:38 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:27.924 22:02:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:27.924 22:02:38 -- nvmf/common.sh@104 -- # continue 2 00:16:27.924 22:02:38 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:27.924 22:02:38 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:27.924 22:02:38 -- nvmf/common.sh@104 -- # continue 2 00:16:27.924 22:02:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:27.924 22:02:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:27.924 22:02:38 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:27.924 22:02:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:27.924 22:02:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:27.924 22:02:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:27.924 22:02:38 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:27.924 22:02:38 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:27.924 22:02:38 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:27.924 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:27.924 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:27.924 altname enp217s0f0np0 00:16:27.924 altname ens818f0np0 00:16:27.924 inet 192.168.100.8/24 scope global mlx_0_0 00:16:27.924 valid_lft forever preferred_lft forever 00:16:27.924 22:02:38 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:27.924 22:02:38 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:27.924 22:02:38 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:27.924 22:02:38 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:27.924 22:02:38 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:27.924 22:02:38 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:27.924 22:02:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:27.924 22:02:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:27.924 22:02:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:27.924 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:27.924 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:27.924 altname enp217s0f1np1 00:16:27.924 altname ens818f1np1 00:16:27.924 inet 192.168.100.9/24 scope global mlx_0_1 00:16:27.924 valid_lft forever preferred_lft forever 00:16:27.924 22:02:39 -- nvmf/common.sh@410 -- # return 0 00:16:27.924 22:02:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:27.924 22:02:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:27.924 22:02:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:27.924 22:02:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:27.924 22:02:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:27.924 22:02:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:27.924 22:02:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:27.924 22:02:39 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:27.924 22:02:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:27.924 22:02:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:27.924 22:02:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:27.924 22:02:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:27.924 22:02:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:27.924 22:02:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:27.924 22:02:39 -- nvmf/common.sh@104 -- # continue 2 00:16:27.924 22:02:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:27.924 22:02:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:27.924 22:02:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:27.924 22:02:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:27.924 22:02:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:27.924 22:02:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:27.924 22:02:39 -- nvmf/common.sh@104 -- # continue 2 00:16:27.924 22:02:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:27.924 22:02:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:27.924 22:02:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:27.924 22:02:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:27.924 22:02:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:27.924 22:02:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:27.924 22:02:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:27.924 22:02:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:27.924 22:02:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:27.924 22:02:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:27.924 22:02:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:27.924 22:02:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:27.924 22:02:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:27.924 192.168.100.9' 00:16:27.924 22:02:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:27.924 192.168.100.9' 00:16:27.924 22:02:39 -- nvmf/common.sh@445 -- # head -n 1 00:16:27.924 22:02:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:27.924 22:02:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:27.924 192.168.100.9' 00:16:27.924 22:02:39 -- nvmf/common.sh@446 -- # tail -n +2 00:16:27.924 22:02:39 -- nvmf/common.sh@446 -- # head -n 1 00:16:27.924 22:02:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:27.924 22:02:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:27.924 22:02:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:27.924 22:02:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:27.924 22:02:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:27.924 22:02:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:27.924 22:02:39 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:27.924 22:02:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:27.924 22:02:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:27.925 22:02:39 -- common/autotest_common.sh@10 -- # set +x 00:16:27.925 22:02:39 -- nvmf/common.sh@469 -- # nvmfpid=2149492 00:16:27.925 22:02:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:27.925 22:02:39 -- 
nvmf/common.sh@470 -- # waitforlisten 2149492 00:16:27.925 22:02:39 -- common/autotest_common.sh@819 -- # '[' -z 2149492 ']' 00:16:27.925 22:02:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.925 22:02:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.925 22:02:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.925 22:02:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.925 22:02:39 -- common/autotest_common.sh@10 -- # set +x 00:16:28.185 [2024-07-26 22:02:39.172429] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:28.185 [2024-07-26 22:02:39.172477] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.185 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.185 [2024-07-26 22:02:39.255128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.185 [2024-07-26 22:02:39.292729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:28.185 [2024-07-26 22:02:39.292833] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.185 [2024-07-26 22:02:39.292842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.185 [2024-07-26 22:02:39.292851] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.185 [2024-07-26 22:02:39.292948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.185 [2024-07-26 22:02:39.293031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:28.185 [2024-07-26 22:02:39.293032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.754 22:02:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:28.754 22:02:39 -- common/autotest_common.sh@852 -- # return 0 00:16:28.754 22:02:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:28.754 22:02:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:28.754 22:02:39 -- common/autotest_common.sh@10 -- # set +x 00:16:29.013 22:02:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.013 22:02:40 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:29.013 22:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.013 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:29.013 [2024-07-26 22:02:40.050351] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24c6cd0/0x24cb1c0) succeed. 00:16:29.013 [2024-07-26 22:02:40.060721] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24c8220/0x250c850) succeed. 
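The trace above shows the standard target bring-up for this test: nvmfappstart launches nvmf_tgt with core mask 0xE, waitforlisten blocks until the target answers on /var/tmp/spdk.sock, and nvmf_create_transport -t rdma is what produces the two create_ib_device messages for mlx5_0/mlx5_1. A minimal standalone sketch of that sequence, assuming the default RPC socket; the rpc_get_methods poll is only a stand-in for waitforlisten, not its literal implementation:

    # start the SPDK target on cores 1-3 (mask 0xE), as in the trace above
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # wait until the RPC socket answers (rough stand-in for waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    # creating the RDMA transport is what opens the mlx5 IB devices seen above
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192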
00:16:29.013 22:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.013 22:02:40 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:29.013 22:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.013 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:29.013 22:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.013 22:02:40 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:29.013 22:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.013 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:29.013 [2024-07-26 22:02:40.178780] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:29.013 22:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.013 22:02:40 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:29.013 22:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.013 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:29.013 NULL1 00:16:29.013 22:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.013 22:02:40 -- target/connect_stress.sh@21 -- # PERF_PID=2149641 00:16:29.014 22:02:40 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:29.014 22:02:40 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:29.014 22:02:40 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.014 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.014 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.014 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.014 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.014 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.014 EAL: No free 2048 kB hugepages reported on node 1 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.014 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.014 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.014 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.014 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 
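At this point the test has its target-side plumbing: subsystem nqn.2016-06.io.spdk:cnode1 (any host allowed, serial SPDK00000000000001, up to 10 namespaces), an RDMA listener on 192.168.100.8:4420, a null bdev NULL1, and connect_stress started in the background while the seq 1 20 / cat loop assembles rpc.txt. Roughly the same setup expressed with rpc.py directly; the rpc.txt contents are not visible in the trace and are omitted here:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512      # null bdev NULL1, size 1000, 512-byte blocks
    # hammer connect/disconnect against the listener for 10 seconds
    ./test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!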
00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:29.272 22:02:40 -- target/connect_stress.sh@28 -- # cat 00:16:29.272 22:02:40 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:29.272 22:02:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.272 22:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.272 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:29.530 22:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.530 22:02:40 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:29.530 22:02:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.530 22:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.530 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:29.788 22:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.788 22:02:40 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:29.788 22:02:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:29.788 22:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.788 22:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:30.354 22:02:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.354 22:02:41 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:30.354 22:02:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.354 22:02:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.354 22:02:41 -- common/autotest_common.sh@10 -- # set +x 00:16:30.612 22:02:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.612 22:02:41 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:30.612 22:02:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.612 22:02:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.612 22:02:41 -- common/autotest_common.sh@10 -- # set +x 00:16:30.871 22:02:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.871 22:02:41 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:30.871 22:02:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:30.871 22:02:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.871 22:02:41 -- common/autotest_common.sh@10 -- # set +x 00:16:31.130 22:02:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.130 22:02:42 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:31.130 22:02:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.130 22:02:42 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.130 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:16:31.389 22:02:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.389 22:02:42 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:31.389 22:02:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.389 22:02:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.389 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:16:31.955 22:02:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.955 22:02:42 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:31.955 22:02:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:31.955 22:02:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.955 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:16:32.213 22:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.213 22:02:43 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:32.213 22:02:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.213 22:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.213 22:02:43 -- common/autotest_common.sh@10 -- # set +x 00:16:32.471 22:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.471 22:02:43 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:32.471 22:02:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.471 22:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.471 22:02:43 -- common/autotest_common.sh@10 -- # set +x 00:16:32.730 22:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.730 22:02:43 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:32.730 22:02:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.730 22:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.730 22:02:43 -- common/autotest_common.sh@10 -- # set +x 00:16:32.988 22:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.988 22:02:44 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:32.988 22:02:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:32.988 22:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.988 22:02:44 -- common/autotest_common.sh@10 -- # set +x 00:16:33.553 22:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.553 22:02:44 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:33.553 22:02:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.553 22:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.553 22:02:44 -- common/autotest_common.sh@10 -- # set +x 00:16:33.811 22:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.811 22:02:44 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:33.811 22:02:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:33.811 22:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.811 22:02:44 -- common/autotest_common.sh@10 -- # set +x 00:16:34.068 22:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.068 22:02:45 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:34.068 22:02:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.068 22:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.068 22:02:45 -- common/autotest_common.sh@10 -- # set +x 00:16:34.326 22:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.326 22:02:45 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:34.326 22:02:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.326 22:02:45 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.326 22:02:45 -- common/autotest_common.sh@10 -- # set +x 00:16:34.893 22:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.893 22:02:45 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:34.893 22:02:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:34.893 22:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.893 22:02:45 -- common/autotest_common.sh@10 -- # set +x 00:16:35.151 22:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.151 22:02:46 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:35.151 22:02:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.151 22:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.151 22:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:35.445 22:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.445 22:02:46 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:35.445 22:02:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.445 22:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.445 22:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:35.704 22:02:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.704 22:02:46 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:35.704 22:02:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.704 22:02:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.704 22:02:46 -- common/autotest_common.sh@10 -- # set +x 00:16:35.963 22:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.963 22:02:47 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:35.963 22:02:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:35.963 22:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.963 22:02:47 -- common/autotest_common.sh@10 -- # set +x 00:16:36.531 22:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.531 22:02:47 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:36.531 22:02:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.531 22:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.531 22:02:47 -- common/autotest_common.sh@10 -- # set +x 00:16:36.791 22:02:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.791 22:02:47 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:36.791 22:02:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:36.791 22:02:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.791 22:02:47 -- common/autotest_common.sh@10 -- # set +x 00:16:37.050 22:02:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:37.050 22:02:48 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:37.050 22:02:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.050 22:02:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:37.050 22:02:48 -- common/autotest_common.sh@10 -- # set +x 00:16:37.309 22:02:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:37.309 22:02:48 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:37.309 22:02:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.309 22:02:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:37.309 22:02:48 -- common/autotest_common.sh@10 -- # set +x 00:16:37.568 22:02:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:37.568 22:02:48 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:37.568 22:02:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:37.568 22:02:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:37.568 22:02:48 -- common/autotest_common.sh@10 -- # set +x 00:16:38.136 22:02:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.137 22:02:49 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:38.137 22:02:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.137 22:02:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.137 22:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:38.395 22:02:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.395 22:02:49 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:38.395 22:02:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.395 22:02:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.395 22:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:38.654 22:02:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.655 22:02:49 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:38.655 22:02:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.655 22:02:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.655 22:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:38.914 22:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.914 22:02:50 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:38.914 22:02:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:38.914 22:02:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.914 22:02:50 -- common/autotest_common.sh@10 -- # set +x 00:16:39.173 22:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.173 22:02:50 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:39.173 22:02:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:39.173 22:02:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.173 22:02:50 -- common/autotest_common.sh@10 -- # set +x 00:16:39.431 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:39.690 22:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.690 22:02:50 -- target/connect_stress.sh@34 -- # kill -0 2149641 00:16:39.690 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2149641) - No such process 00:16:39.690 22:02:50 -- target/connect_stress.sh@38 -- # wait 2149641 00:16:39.690 22:02:50 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:39.690 22:02:50 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:39.690 22:02:50 -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:39.690 22:02:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:39.690 22:02:50 -- nvmf/common.sh@116 -- # sync 00:16:39.690 22:02:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:39.690 22:02:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:39.690 22:02:50 -- nvmf/common.sh@119 -- # set +e 00:16:39.690 22:02:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:39.690 22:02:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:39.690 rmmod nvme_rdma 00:16:39.690 rmmod nvme_fabrics 00:16:39.690 22:02:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:39.690 22:02:50 -- nvmf/common.sh@123 -- # set -e 00:16:39.690 22:02:50 -- nvmf/common.sh@124 -- # return 0 00:16:39.690 22:02:50 -- nvmf/common.sh@477 -- # '[' -n 2149492 ']' 00:16:39.690 22:02:50 -- nvmf/common.sh@478 -- # killprocess 2149492 00:16:39.690 22:02:50 -- common/autotest_common.sh@926 -- # '[' -z 2149492 ']' 
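The repeating kill -0 2149641 / rpc_cmd pairs above are connect_stress.sh's watchdog loop: while the stressor is still running, the script keeps replaying rpc.txt against the target; once the process disappears (the "No such process" line), the trap is cleared and nvmftestfini unloads nvme-rdma/nvme-fabrics and shuts the target down. A condensed sketch of that pattern; piping rpc.txt straight into rpc.py is only an approximation of the harness's rpc_cmd helper:

    # keep the target busy with RPCs for as long as the stressor is alive
    while kill -0 "$PERF_PID" 2>/dev/null; do
        ./scripts/rpc.py < rpc.txt
    done
    wait "$PERF_PID" 2>/dev/null || true
    rm -f rpc.txt
    # teardown, mirroring nvmftestfini in the trace
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"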
00:16:39.690 22:02:50 -- common/autotest_common.sh@930 -- # kill -0 2149492 00:16:39.690 22:02:50 -- common/autotest_common.sh@931 -- # uname 00:16:39.690 22:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:39.690 22:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2149492 00:16:39.690 22:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:39.690 22:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:39.690 22:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2149492' 00:16:39.690 killing process with pid 2149492 00:16:39.690 22:02:50 -- common/autotest_common.sh@945 -- # kill 2149492 00:16:39.690 22:02:50 -- common/autotest_common.sh@950 -- # wait 2149492 00:16:39.950 22:02:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:39.950 22:02:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:39.950 00:16:39.950 real 0m20.392s 00:16:39.950 user 0m43.021s 00:16:39.950 sys 0m8.956s 00:16:39.950 22:02:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.950 22:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:39.950 ************************************ 00:16:39.950 END TEST nvmf_connect_stress 00:16:39.950 ************************************ 00:16:39.950 22:02:51 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:39.950 22:02:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:39.950 22:02:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.950 22:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:39.950 ************************************ 00:16:39.950 START TEST nvmf_fused_ordering 00:16:39.950 ************************************ 00:16:39.950 22:02:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:16:40.210 * Looking for test storage... 
00:16:40.210 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:40.210 22:02:51 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.210 22:02:51 -- nvmf/common.sh@7 -- # uname -s 00:16:40.210 22:02:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.210 22:02:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.210 22:02:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.210 22:02:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.210 22:02:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.210 22:02:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.210 22:02:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.210 22:02:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.210 22:02:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.210 22:02:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.210 22:02:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:40.210 22:02:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:40.210 22:02:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.210 22:02:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.210 22:02:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.210 22:02:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:40.210 22:02:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.210 22:02:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.210 22:02:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.210 22:02:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.210 22:02:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.210 22:02:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.210 22:02:51 -- paths/export.sh@5 -- # export PATH 00:16:40.210 22:02:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.210 22:02:51 -- nvmf/common.sh@46 -- # : 0 00:16:40.210 22:02:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:40.210 22:02:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:40.210 22:02:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:40.210 22:02:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.210 22:02:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.210 22:02:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:40.210 22:02:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:40.210 22:02:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:40.210 22:02:51 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:40.210 22:02:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:40.210 22:02:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.210 22:02:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:40.210 22:02:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:40.210 22:02:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:40.210 22:02:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.210 22:02:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.210 22:02:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.210 22:02:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:40.210 22:02:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:40.210 22:02:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:40.210 22:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:50.196 22:02:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:50.196 22:02:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:50.196 22:02:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:50.196 22:02:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:50.196 22:02:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:50.196 22:02:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:50.196 22:02:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:50.196 22:02:59 -- nvmf/common.sh@294 -- # net_devs=() 00:16:50.196 22:02:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:50.196 22:02:59 -- nvmf/common.sh@295 -- # e810=() 00:16:50.196 22:02:59 -- nvmf/common.sh@295 -- # local -ga e810 00:16:50.196 22:02:59 -- nvmf/common.sh@296 -- # x722=() 
00:16:50.196 22:02:59 -- nvmf/common.sh@296 -- # local -ga x722 00:16:50.196 22:02:59 -- nvmf/common.sh@297 -- # mlx=() 00:16:50.196 22:02:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:50.196 22:02:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.196 22:02:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:50.196 22:02:59 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:50.196 22:02:59 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:50.196 22:02:59 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:50.196 22:02:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:50.196 22:02:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:50.196 22:02:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:50.196 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:50.196 22:02:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:50.196 22:02:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:50.196 22:02:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:50.196 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:50.196 22:02:59 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:50.196 22:02:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:50.196 22:02:59 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:50.196 22:02:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.196 22:02:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:50.196 22:02:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.196 22:02:59 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:50.196 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:50.196 22:02:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.196 22:02:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:50.196 22:02:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.196 22:02:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:50.196 22:02:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.196 22:02:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:50.196 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:50.196 22:02:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.196 22:02:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:50.196 22:02:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:50.196 22:02:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:50.196 22:02:59 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:50.196 22:02:59 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:50.196 22:02:59 -- nvmf/common.sh@57 -- # uname 00:16:50.196 22:02:59 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:50.196 22:02:59 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:50.196 22:02:59 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:50.196 22:02:59 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:50.196 22:02:59 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:50.196 22:02:59 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:50.197 22:02:59 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:50.197 22:02:59 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:50.197 22:02:59 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:50.197 22:02:59 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:50.197 22:02:59 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:50.197 22:02:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:50.197 22:02:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:50.197 22:02:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:50.197 22:02:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:50.197 22:02:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:50.197 22:02:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:50.197 22:02:59 -- nvmf/common.sh@104 -- # continue 2 00:16:50.197 22:02:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:50.197 22:02:59 -- nvmf/common.sh@104 -- # continue 2 00:16:50.197 22:02:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:50.197 22:02:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:50.197 22:02:59 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:50.197 22:02:59 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:50.197 22:02:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:50.197 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:50.197 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:50.197 altname enp217s0f0np0 00:16:50.197 altname ens818f0np0 00:16:50.197 inet 192.168.100.8/24 scope global mlx_0_0 00:16:50.197 valid_lft forever preferred_lft forever 00:16:50.197 22:02:59 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:50.197 22:02:59 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:50.197 22:02:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:50.197 22:02:59 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:50.197 22:02:59 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:50.197 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:50.197 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:50.197 altname enp217s0f1np1 00:16:50.197 altname ens818f1np1 00:16:50.197 inet 192.168.100.9/24 scope global mlx_0_1 00:16:50.197 valid_lft forever preferred_lft forever 00:16:50.197 22:02:59 -- nvmf/common.sh@410 -- # return 0 00:16:50.197 22:02:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:50.197 22:02:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:50.197 22:02:59 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:50.197 22:02:59 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:50.197 22:02:59 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:50.197 22:02:59 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:50.197 22:02:59 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:50.197 22:02:59 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:50.197 22:02:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:50.197 22:02:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:50.197 22:02:59 -- nvmf/common.sh@104 -- # continue 2 00:16:50.197 22:02:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:50.197 22:02:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:50.197 22:02:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:50.197 22:02:59 -- nvmf/common.sh@104 -- # continue 2 00:16:50.197 22:02:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:50.197 22:02:59 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:50.197 22:02:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:50.197 22:02:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:50.197 22:02:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:50.197 22:02:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:50.197 22:02:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:50.197 22:02:59 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:50.197 192.168.100.9' 00:16:50.197 22:02:59 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:50.197 192.168.100.9' 00:16:50.197 22:02:59 -- nvmf/common.sh@445 -- # head -n 1 00:16:50.197 22:02:59 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:50.197 22:02:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:50.197 192.168.100.9' 00:16:50.197 22:02:59 -- nvmf/common.sh@446 -- # head -n 1 00:16:50.197 22:02:59 -- nvmf/common.sh@446 -- # tail -n +2 00:16:50.197 22:02:59 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:50.197 22:02:59 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:50.197 22:02:59 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:50.197 22:02:59 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:50.197 22:02:59 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:50.197 22:02:59 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:50.197 22:02:59 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:50.197 22:02:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:50.197 22:02:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:50.197 22:02:59 -- common/autotest_common.sh@10 -- # set +x 00:16:50.197 22:02:59 -- nvmf/common.sh@469 -- # nvmfpid=2155608 00:16:50.197 22:02:59 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:50.197 22:02:59 -- nvmf/common.sh@470 -- # waitforlisten 2155608 00:16:50.197 22:02:59 -- common/autotest_common.sh@819 -- # '[' -z 2155608 ']' 00:16:50.197 22:02:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.197 22:02:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:50.197 22:02:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.197 22:02:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:50.197 22:02:59 -- common/autotest_common.sh@10 -- # set +x 00:16:50.197 [2024-07-26 22:02:59.957333] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:50.197 [2024-07-26 22:02:59.957390] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.197 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.197 [2024-07-26 22:03:00.046613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.197 [2024-07-26 22:03:00.085652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:50.197 [2024-07-26 22:03:00.085782] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.197 [2024-07-26 22:03:00.085792] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.197 [2024-07-26 22:03:00.085802] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.197 [2024-07-26 22:03:00.085830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:50.197 22:03:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:50.197 22:03:00 -- common/autotest_common.sh@852 -- # return 0 00:16:50.197 22:03:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:50.197 22:03:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:50.197 22:03:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.197 22:03:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.197 22:03:00 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:50.197 22:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.197 22:03:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.197 [2024-07-26 22:03:00.828827] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc4c620/0xc50b10) succeed. 00:16:50.197 [2024-07-26 22:03:00.837909] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc4db20/0xc921a0) succeed. 
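Note on the traces above: the nvmf/common.sh@111-@112 and @444-@446 steps derive the RDMA target addresses by listing each Mellanox netdev's IPv4 address and then taking the first and second entries of the resulting list. A minimal shell sketch of that pipeline, with the interface names taken from this log and the simple for-loop standing in for the script's get_rdma_if_list helper (illustrative only, not the script itself):

    # Strip the /24 prefix length from each RDMA netdev's IPv4 address
    RDMA_IP_LIST=$(for iface in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
    done)
    # First entry becomes the primary target address, second entry the secondary
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9 here

Those two addresses are what the listener setup and the fused_ordering initiator below use.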
00:16:50.197 22:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.197 22:03:00 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:50.197 22:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.197 22:03:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.197 22:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.197 22:03:00 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:50.197 22:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.197 22:03:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.197 [2024-07-26 22:03:00.901553] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:50.198 22:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.198 22:03:00 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:50.198 22:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.198 22:03:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.198 NULL1 00:16:50.198 22:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.198 22:03:00 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:50.198 22:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.198 22:03:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.198 22:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.198 22:03:00 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:50.198 22:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.198 22:03:00 -- common/autotest_common.sh@10 -- # set +x 00:16:50.198 22:03:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.198 22:03:00 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:50.198 [2024-07-26 22:03:00.954904] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:50.198 [2024-07-26 22:03:00.954941] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155894 ] 00:16:50.198 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.198 Attached to nqn.2016-06.io.spdk:cnode1 00:16:50.198 Namespace ID: 1 size: 1GB 00:16:50.198 fused_ordering(0) 00:16:50.198 fused_ordering(1) 00:16:50.198 fused_ordering(2) 00:16:50.198 fused_ordering(3) 00:16:50.198 fused_ordering(4) 00:16:50.198 fused_ordering(5) 00:16:50.198 fused_ordering(6) 00:16:50.198 fused_ordering(7) 00:16:50.198 fused_ordering(8) 00:16:50.198 fused_ordering(9) 00:16:50.198 fused_ordering(10) 00:16:50.198 fused_ordering(11) 00:16:50.198 fused_ordering(12) 00:16:50.198 fused_ordering(13) 00:16:50.198 fused_ordering(14) 00:16:50.198 fused_ordering(15) 00:16:50.198 fused_ordering(16) 00:16:50.198 fused_ordering(17) 00:16:50.198 fused_ordering(18) 00:16:50.198 fused_ordering(19) 00:16:50.198 fused_ordering(20) 00:16:50.198 fused_ordering(21) 00:16:50.198 fused_ordering(22) 00:16:50.198 fused_ordering(23) 00:16:50.198 fused_ordering(24) 00:16:50.198 fused_ordering(25) 00:16:50.198 fused_ordering(26) 00:16:50.198 fused_ordering(27) 00:16:50.198 fused_ordering(28) 00:16:50.198 fused_ordering(29) 00:16:50.198 fused_ordering(30) 00:16:50.198 fused_ordering(31) 00:16:50.198 fused_ordering(32) 00:16:50.198 fused_ordering(33) 00:16:50.198 fused_ordering(34) 00:16:50.198 fused_ordering(35) 00:16:50.198 fused_ordering(36) 00:16:50.198 fused_ordering(37) 00:16:50.198 fused_ordering(38) 00:16:50.198 fused_ordering(39) 00:16:50.198 fused_ordering(40) 00:16:50.198 fused_ordering(41) 00:16:50.198 fused_ordering(42) 00:16:50.198 fused_ordering(43) 00:16:50.198 fused_ordering(44) 00:16:50.198 fused_ordering(45) 00:16:50.198 fused_ordering(46) 00:16:50.198 fused_ordering(47) 00:16:50.198 fused_ordering(48) 00:16:50.198 fused_ordering(49) 00:16:50.198 fused_ordering(50) 00:16:50.198 fused_ordering(51) 00:16:50.198 fused_ordering(52) 00:16:50.198 fused_ordering(53) 00:16:50.198 fused_ordering(54) 00:16:50.198 fused_ordering(55) 00:16:50.198 fused_ordering(56) 00:16:50.198 fused_ordering(57) 00:16:50.198 fused_ordering(58) 00:16:50.198 fused_ordering(59) 00:16:50.198 fused_ordering(60) 00:16:50.198 fused_ordering(61) 00:16:50.198 fused_ordering(62) 00:16:50.198 fused_ordering(63) 00:16:50.198 fused_ordering(64) 00:16:50.198 fused_ordering(65) 00:16:50.198 fused_ordering(66) 00:16:50.198 fused_ordering(67) 00:16:50.198 fused_ordering(68) 00:16:50.198 fused_ordering(69) 00:16:50.198 fused_ordering(70) 00:16:50.198 fused_ordering(71) 00:16:50.198 fused_ordering(72) 00:16:50.198 fused_ordering(73) 00:16:50.198 fused_ordering(74) 00:16:50.198 fused_ordering(75) 00:16:50.198 fused_ordering(76) 00:16:50.198 fused_ordering(77) 00:16:50.198 fused_ordering(78) 00:16:50.198 fused_ordering(79) 00:16:50.198 fused_ordering(80) 00:16:50.198 fused_ordering(81) 00:16:50.198 fused_ordering(82) 00:16:50.198 fused_ordering(83) 00:16:50.198 fused_ordering(84) 00:16:50.198 fused_ordering(85) 00:16:50.198 fused_ordering(86) 00:16:50.198 fused_ordering(87) 00:16:50.198 fused_ordering(88) 00:16:50.198 fused_ordering(89) 00:16:50.198 fused_ordering(90) 00:16:50.198 fused_ordering(91) 00:16:50.198 fused_ordering(92) 00:16:50.198 fused_ordering(93) 00:16:50.198 fused_ordering(94) 00:16:50.198 fused_ordering(95) 00:16:50.198 fused_ordering(96) 00:16:50.198 
fused_ordering(97) 00:16:50.198 fused_ordering(98) 00:16:50.198 fused_ordering(99) 00:16:50.198 fused_ordering(100) 00:16:50.198 fused_ordering(101) 00:16:50.198 fused_ordering(102) 00:16:50.198 fused_ordering(103) 00:16:50.198 fused_ordering(104) 00:16:50.198 fused_ordering(105) 00:16:50.198 fused_ordering(106) 00:16:50.198 fused_ordering(107) 00:16:50.198 fused_ordering(108) 00:16:50.198 fused_ordering(109) 00:16:50.198 fused_ordering(110) 00:16:50.198 fused_ordering(111) 00:16:50.198 fused_ordering(112) 00:16:50.198 fused_ordering(113) 00:16:50.198 fused_ordering(114) 00:16:50.198 fused_ordering(115) 00:16:50.198 fused_ordering(116) 00:16:50.198 fused_ordering(117) 00:16:50.198 fused_ordering(118) 00:16:50.198 fused_ordering(119) 00:16:50.198 fused_ordering(120) 00:16:50.198 fused_ordering(121) 00:16:50.198 fused_ordering(122) 00:16:50.198 fused_ordering(123) 00:16:50.198 fused_ordering(124) 00:16:50.198 fused_ordering(125) 00:16:50.198 fused_ordering(126) 00:16:50.198 fused_ordering(127) 00:16:50.198 fused_ordering(128) 00:16:50.198 fused_ordering(129) 00:16:50.198 fused_ordering(130) 00:16:50.198 fused_ordering(131) 00:16:50.198 fused_ordering(132) 00:16:50.198 fused_ordering(133) 00:16:50.198 fused_ordering(134) 00:16:50.198 fused_ordering(135) 00:16:50.198 fused_ordering(136) 00:16:50.198 fused_ordering(137) 00:16:50.198 fused_ordering(138) 00:16:50.198 fused_ordering(139) 00:16:50.198 fused_ordering(140) 00:16:50.198 fused_ordering(141) 00:16:50.198 fused_ordering(142) 00:16:50.198 fused_ordering(143) 00:16:50.198 fused_ordering(144) 00:16:50.198 fused_ordering(145) 00:16:50.198 fused_ordering(146) 00:16:50.198 fused_ordering(147) 00:16:50.198 fused_ordering(148) 00:16:50.198 fused_ordering(149) 00:16:50.198 fused_ordering(150) 00:16:50.198 fused_ordering(151) 00:16:50.198 fused_ordering(152) 00:16:50.198 fused_ordering(153) 00:16:50.198 fused_ordering(154) 00:16:50.198 fused_ordering(155) 00:16:50.198 fused_ordering(156) 00:16:50.198 fused_ordering(157) 00:16:50.198 fused_ordering(158) 00:16:50.198 fused_ordering(159) 00:16:50.198 fused_ordering(160) 00:16:50.198 fused_ordering(161) 00:16:50.198 fused_ordering(162) 00:16:50.198 fused_ordering(163) 00:16:50.198 fused_ordering(164) 00:16:50.198 fused_ordering(165) 00:16:50.198 fused_ordering(166) 00:16:50.198 fused_ordering(167) 00:16:50.198 fused_ordering(168) 00:16:50.198 fused_ordering(169) 00:16:50.198 fused_ordering(170) 00:16:50.198 fused_ordering(171) 00:16:50.198 fused_ordering(172) 00:16:50.198 fused_ordering(173) 00:16:50.198 fused_ordering(174) 00:16:50.198 fused_ordering(175) 00:16:50.198 fused_ordering(176) 00:16:50.198 fused_ordering(177) 00:16:50.198 fused_ordering(178) 00:16:50.198 fused_ordering(179) 00:16:50.198 fused_ordering(180) 00:16:50.198 fused_ordering(181) 00:16:50.198 fused_ordering(182) 00:16:50.198 fused_ordering(183) 00:16:50.198 fused_ordering(184) 00:16:50.198 fused_ordering(185) 00:16:50.198 fused_ordering(186) 00:16:50.198 fused_ordering(187) 00:16:50.198 fused_ordering(188) 00:16:50.198 fused_ordering(189) 00:16:50.198 fused_ordering(190) 00:16:50.198 fused_ordering(191) 00:16:50.198 fused_ordering(192) 00:16:50.198 fused_ordering(193) 00:16:50.198 fused_ordering(194) 00:16:50.198 fused_ordering(195) 00:16:50.198 fused_ordering(196) 00:16:50.198 fused_ordering(197) 00:16:50.198 fused_ordering(198) 00:16:50.198 fused_ordering(199) 00:16:50.198 fused_ordering(200) 00:16:50.198 fused_ordering(201) 00:16:50.198 fused_ordering(202) 00:16:50.198 fused_ordering(203) 00:16:50.198 fused_ordering(204) 
00:16:50.198 fused_ordering(205) 00:16:50.198 fused_ordering(206) 00:16:50.198 fused_ordering(207) 00:16:50.198 fused_ordering(208) 00:16:50.198 fused_ordering(209) 00:16:50.198 fused_ordering(210) 00:16:50.198 fused_ordering(211) 00:16:50.198 fused_ordering(212) 00:16:50.198 fused_ordering(213) 00:16:50.198 fused_ordering(214) 00:16:50.198 fused_ordering(215) 00:16:50.198 fused_ordering(216) 00:16:50.198 fused_ordering(217) 00:16:50.198 fused_ordering(218) 00:16:50.198 fused_ordering(219) 00:16:50.198 fused_ordering(220) 00:16:50.198 fused_ordering(221) 00:16:50.198 fused_ordering(222) 00:16:50.198 fused_ordering(223) 00:16:50.198 fused_ordering(224) 00:16:50.198 fused_ordering(225) 00:16:50.199 fused_ordering(226) 00:16:50.199 fused_ordering(227) 00:16:50.199 fused_ordering(228) 00:16:50.199 fused_ordering(229) 00:16:50.199 fused_ordering(230) 00:16:50.199 fused_ordering(231) 00:16:50.199 fused_ordering(232) 00:16:50.199 fused_ordering(233) 00:16:50.199 fused_ordering(234) 00:16:50.199 fused_ordering(235) 00:16:50.199 fused_ordering(236) 00:16:50.199 fused_ordering(237) 00:16:50.199 fused_ordering(238) 00:16:50.199 fused_ordering(239) 00:16:50.199 fused_ordering(240) 00:16:50.199 fused_ordering(241) 00:16:50.199 fused_ordering(242) 00:16:50.199 fused_ordering(243) 00:16:50.199 fused_ordering(244) 00:16:50.199 fused_ordering(245) 00:16:50.199 fused_ordering(246) 00:16:50.199 fused_ordering(247) 00:16:50.199 fused_ordering(248) 00:16:50.199 fused_ordering(249) 00:16:50.199 fused_ordering(250) 00:16:50.199 fused_ordering(251) 00:16:50.199 fused_ordering(252) 00:16:50.199 fused_ordering(253) 00:16:50.199 fused_ordering(254) 00:16:50.199 fused_ordering(255) 00:16:50.199 fused_ordering(256) 00:16:50.199 fused_ordering(257) 00:16:50.199 fused_ordering(258) 00:16:50.199 fused_ordering(259) 00:16:50.199 fused_ordering(260) 00:16:50.199 fused_ordering(261) 00:16:50.199 fused_ordering(262) 00:16:50.199 fused_ordering(263) 00:16:50.199 fused_ordering(264) 00:16:50.199 fused_ordering(265) 00:16:50.199 fused_ordering(266) 00:16:50.199 fused_ordering(267) 00:16:50.199 fused_ordering(268) 00:16:50.199 fused_ordering(269) 00:16:50.199 fused_ordering(270) 00:16:50.199 fused_ordering(271) 00:16:50.199 fused_ordering(272) 00:16:50.199 fused_ordering(273) 00:16:50.199 fused_ordering(274) 00:16:50.199 fused_ordering(275) 00:16:50.199 fused_ordering(276) 00:16:50.199 fused_ordering(277) 00:16:50.199 fused_ordering(278) 00:16:50.199 fused_ordering(279) 00:16:50.199 fused_ordering(280) 00:16:50.199 fused_ordering(281) 00:16:50.199 fused_ordering(282) 00:16:50.199 fused_ordering(283) 00:16:50.199 fused_ordering(284) 00:16:50.199 fused_ordering(285) 00:16:50.199 fused_ordering(286) 00:16:50.199 fused_ordering(287) 00:16:50.199 fused_ordering(288) 00:16:50.199 fused_ordering(289) 00:16:50.199 fused_ordering(290) 00:16:50.199 fused_ordering(291) 00:16:50.199 fused_ordering(292) 00:16:50.199 fused_ordering(293) 00:16:50.199 fused_ordering(294) 00:16:50.199 fused_ordering(295) 00:16:50.199 fused_ordering(296) 00:16:50.199 fused_ordering(297) 00:16:50.199 fused_ordering(298) 00:16:50.199 fused_ordering(299) 00:16:50.199 fused_ordering(300) 00:16:50.199 fused_ordering(301) 00:16:50.199 fused_ordering(302) 00:16:50.199 fused_ordering(303) 00:16:50.199 fused_ordering(304) 00:16:50.199 fused_ordering(305) 00:16:50.199 fused_ordering(306) 00:16:50.199 fused_ordering(307) 00:16:50.199 fused_ordering(308) 00:16:50.199 fused_ordering(309) 00:16:50.199 fused_ordering(310) 00:16:50.199 fused_ordering(311) 00:16:50.199 
fused_ordering(312) 00:16:50.199 fused_ordering(313) 00:16:50.199 fused_ordering(314) 00:16:50.199 fused_ordering(315) 00:16:50.199 fused_ordering(316) 00:16:50.199 fused_ordering(317) 00:16:50.199 fused_ordering(318) 00:16:50.199 fused_ordering(319) 00:16:50.199 fused_ordering(320) 00:16:50.199 fused_ordering(321) 00:16:50.199 fused_ordering(322) 00:16:50.199 fused_ordering(323) 00:16:50.199 fused_ordering(324) 00:16:50.199 fused_ordering(325) 00:16:50.199 fused_ordering(326) 00:16:50.199 fused_ordering(327) 00:16:50.199 fused_ordering(328) 00:16:50.199 fused_ordering(329) 00:16:50.199 fused_ordering(330) 00:16:50.199 fused_ordering(331) 00:16:50.199 fused_ordering(332) 00:16:50.199 fused_ordering(333) 00:16:50.199 fused_ordering(334) 00:16:50.199 fused_ordering(335) 00:16:50.199 fused_ordering(336) 00:16:50.199 fused_ordering(337) 00:16:50.199 fused_ordering(338) 00:16:50.199 fused_ordering(339) 00:16:50.199 fused_ordering(340) 00:16:50.199 fused_ordering(341) 00:16:50.199 fused_ordering(342) 00:16:50.199 fused_ordering(343) 00:16:50.199 fused_ordering(344) 00:16:50.199 fused_ordering(345) 00:16:50.199 fused_ordering(346) 00:16:50.199 fused_ordering(347) 00:16:50.199 fused_ordering(348) 00:16:50.199 fused_ordering(349) 00:16:50.199 fused_ordering(350) 00:16:50.199 fused_ordering(351) 00:16:50.199 fused_ordering(352) 00:16:50.199 fused_ordering(353) 00:16:50.199 fused_ordering(354) 00:16:50.199 fused_ordering(355) 00:16:50.199 fused_ordering(356) 00:16:50.199 fused_ordering(357) 00:16:50.199 fused_ordering(358) 00:16:50.199 fused_ordering(359) 00:16:50.199 fused_ordering(360) 00:16:50.199 fused_ordering(361) 00:16:50.199 fused_ordering(362) 00:16:50.199 fused_ordering(363) 00:16:50.199 fused_ordering(364) 00:16:50.199 fused_ordering(365) 00:16:50.199 fused_ordering(366) 00:16:50.199 fused_ordering(367) 00:16:50.199 fused_ordering(368) 00:16:50.199 fused_ordering(369) 00:16:50.199 fused_ordering(370) 00:16:50.199 fused_ordering(371) 00:16:50.199 fused_ordering(372) 00:16:50.199 fused_ordering(373) 00:16:50.199 fused_ordering(374) 00:16:50.199 fused_ordering(375) 00:16:50.199 fused_ordering(376) 00:16:50.199 fused_ordering(377) 00:16:50.199 fused_ordering(378) 00:16:50.199 fused_ordering(379) 00:16:50.199 fused_ordering(380) 00:16:50.199 fused_ordering(381) 00:16:50.199 fused_ordering(382) 00:16:50.199 fused_ordering(383) 00:16:50.199 fused_ordering(384) 00:16:50.199 fused_ordering(385) 00:16:50.199 fused_ordering(386) 00:16:50.199 fused_ordering(387) 00:16:50.199 fused_ordering(388) 00:16:50.199 fused_ordering(389) 00:16:50.199 fused_ordering(390) 00:16:50.199 fused_ordering(391) 00:16:50.199 fused_ordering(392) 00:16:50.199 fused_ordering(393) 00:16:50.199 fused_ordering(394) 00:16:50.199 fused_ordering(395) 00:16:50.199 fused_ordering(396) 00:16:50.199 fused_ordering(397) 00:16:50.199 fused_ordering(398) 00:16:50.199 fused_ordering(399) 00:16:50.199 fused_ordering(400) 00:16:50.199 fused_ordering(401) 00:16:50.199 fused_ordering(402) 00:16:50.199 fused_ordering(403) 00:16:50.199 fused_ordering(404) 00:16:50.199 fused_ordering(405) 00:16:50.199 fused_ordering(406) 00:16:50.199 fused_ordering(407) 00:16:50.199 fused_ordering(408) 00:16:50.199 fused_ordering(409) 00:16:50.199 fused_ordering(410) 00:16:50.199 fused_ordering(411) 00:16:50.199 fused_ordering(412) 00:16:50.199 fused_ordering(413) 00:16:50.199 fused_ordering(414) 00:16:50.199 fused_ordering(415) 00:16:50.199 fused_ordering(416) 00:16:50.199 fused_ordering(417) 00:16:50.199 fused_ordering(418) 00:16:50.199 fused_ordering(419) 
00:16:50.199 fused_ordering(420) 00:16:50.199 fused_ordering(421) 00:16:50.199 fused_ordering(422) 00:16:50.199 fused_ordering(423) 00:16:50.199 fused_ordering(424) 00:16:50.199 fused_ordering(425) 00:16:50.199 fused_ordering(426) 00:16:50.199 fused_ordering(427) 00:16:50.199 fused_ordering(428) 00:16:50.199 fused_ordering(429) 00:16:50.199 fused_ordering(430) 00:16:50.199 fused_ordering(431) 00:16:50.199 fused_ordering(432) 00:16:50.199 fused_ordering(433) 00:16:50.199 fused_ordering(434) 00:16:50.199 fused_ordering(435) 00:16:50.199 fused_ordering(436) 00:16:50.199 fused_ordering(437) 00:16:50.199 fused_ordering(438) 00:16:50.199 fused_ordering(439) 00:16:50.199 fused_ordering(440) 00:16:50.199 fused_ordering(441) 00:16:50.199 fused_ordering(442) 00:16:50.199 fused_ordering(443) 00:16:50.199 fused_ordering(444) 00:16:50.199 fused_ordering(445) 00:16:50.199 fused_ordering(446) 00:16:50.199 fused_ordering(447) 00:16:50.199 fused_ordering(448) 00:16:50.199 fused_ordering(449) 00:16:50.199 fused_ordering(450) 00:16:50.199 fused_ordering(451) 00:16:50.199 fused_ordering(452) 00:16:50.199 fused_ordering(453) 00:16:50.199 fused_ordering(454) 00:16:50.199 fused_ordering(455) 00:16:50.199 fused_ordering(456) 00:16:50.199 fused_ordering(457) 00:16:50.199 fused_ordering(458) 00:16:50.199 fused_ordering(459) 00:16:50.199 fused_ordering(460) 00:16:50.199 fused_ordering(461) 00:16:50.199 fused_ordering(462) 00:16:50.199 fused_ordering(463) 00:16:50.199 fused_ordering(464) 00:16:50.199 fused_ordering(465) 00:16:50.199 fused_ordering(466) 00:16:50.199 fused_ordering(467) 00:16:50.199 fused_ordering(468) 00:16:50.199 fused_ordering(469) 00:16:50.199 fused_ordering(470) 00:16:50.199 fused_ordering(471) 00:16:50.199 fused_ordering(472) 00:16:50.199 fused_ordering(473) 00:16:50.199 fused_ordering(474) 00:16:50.199 fused_ordering(475) 00:16:50.199 fused_ordering(476) 00:16:50.199 fused_ordering(477) 00:16:50.199 fused_ordering(478) 00:16:50.199 fused_ordering(479) 00:16:50.199 fused_ordering(480) 00:16:50.199 fused_ordering(481) 00:16:50.199 fused_ordering(482) 00:16:50.199 fused_ordering(483) 00:16:50.199 fused_ordering(484) 00:16:50.199 fused_ordering(485) 00:16:50.199 fused_ordering(486) 00:16:50.199 fused_ordering(487) 00:16:50.199 fused_ordering(488) 00:16:50.199 fused_ordering(489) 00:16:50.199 fused_ordering(490) 00:16:50.199 fused_ordering(491) 00:16:50.199 fused_ordering(492) 00:16:50.199 fused_ordering(493) 00:16:50.200 fused_ordering(494) 00:16:50.200 fused_ordering(495) 00:16:50.200 fused_ordering(496) 00:16:50.200 fused_ordering(497) 00:16:50.200 fused_ordering(498) 00:16:50.200 fused_ordering(499) 00:16:50.200 fused_ordering(500) 00:16:50.200 fused_ordering(501) 00:16:50.200 fused_ordering(502) 00:16:50.200 fused_ordering(503) 00:16:50.200 fused_ordering(504) 00:16:50.200 fused_ordering(505) 00:16:50.200 fused_ordering(506) 00:16:50.200 fused_ordering(507) 00:16:50.200 fused_ordering(508) 00:16:50.200 fused_ordering(509) 00:16:50.200 fused_ordering(510) 00:16:50.200 fused_ordering(511) 00:16:50.200 fused_ordering(512) 00:16:50.200 fused_ordering(513) 00:16:50.200 fused_ordering(514) 00:16:50.200 fused_ordering(515) 00:16:50.200 fused_ordering(516) 00:16:50.200 fused_ordering(517) 00:16:50.200 fused_ordering(518) 00:16:50.200 fused_ordering(519) 00:16:50.200 fused_ordering(520) 00:16:50.200 fused_ordering(521) 00:16:50.200 fused_ordering(522) 00:16:50.200 fused_ordering(523) 00:16:50.200 fused_ordering(524) 00:16:50.200 fused_ordering(525) 00:16:50.200 fused_ordering(526) 00:16:50.200 
fused_ordering(527) 00:16:50.200 fused_ordering(528) 00:16:50.200 fused_ordering(529) 00:16:50.200 fused_ordering(530) 00:16:50.200 fused_ordering(531) 00:16:50.200 fused_ordering(532) 00:16:50.200 fused_ordering(533) 00:16:50.200 fused_ordering(534) 00:16:50.200 fused_ordering(535) 00:16:50.200 fused_ordering(536) 00:16:50.200 fused_ordering(537) 00:16:50.200 fused_ordering(538) 00:16:50.200 fused_ordering(539) 00:16:50.200 fused_ordering(540) 00:16:50.200 fused_ordering(541) 00:16:50.200 fused_ordering(542) 00:16:50.200 fused_ordering(543) 00:16:50.200 fused_ordering(544) 00:16:50.200 fused_ordering(545) 00:16:50.200 fused_ordering(546) 00:16:50.200 fused_ordering(547) 00:16:50.200 fused_ordering(548) 00:16:50.200 fused_ordering(549) 00:16:50.200 fused_ordering(550) 00:16:50.200 fused_ordering(551) 00:16:50.200 fused_ordering(552) 00:16:50.200 fused_ordering(553) 00:16:50.200 fused_ordering(554) 00:16:50.200 fused_ordering(555) 00:16:50.200 fused_ordering(556) 00:16:50.200 fused_ordering(557) 00:16:50.200 fused_ordering(558) 00:16:50.200 fused_ordering(559) 00:16:50.200 fused_ordering(560) 00:16:50.200 fused_ordering(561) 00:16:50.200 fused_ordering(562) 00:16:50.200 fused_ordering(563) 00:16:50.200 fused_ordering(564) 00:16:50.200 fused_ordering(565) 00:16:50.200 fused_ordering(566) 00:16:50.200 fused_ordering(567) 00:16:50.200 fused_ordering(568) 00:16:50.200 fused_ordering(569) 00:16:50.200 fused_ordering(570) 00:16:50.200 fused_ordering(571) 00:16:50.200 fused_ordering(572) 00:16:50.200 fused_ordering(573) 00:16:50.200 fused_ordering(574) 00:16:50.200 fused_ordering(575) 00:16:50.200 fused_ordering(576) 00:16:50.200 fused_ordering(577) 00:16:50.200 fused_ordering(578) 00:16:50.200 fused_ordering(579) 00:16:50.200 fused_ordering(580) 00:16:50.200 fused_ordering(581) 00:16:50.200 fused_ordering(582) 00:16:50.200 fused_ordering(583) 00:16:50.200 fused_ordering(584) 00:16:50.200 fused_ordering(585) 00:16:50.200 fused_ordering(586) 00:16:50.200 fused_ordering(587) 00:16:50.200 fused_ordering(588) 00:16:50.200 fused_ordering(589) 00:16:50.200 fused_ordering(590) 00:16:50.200 fused_ordering(591) 00:16:50.200 fused_ordering(592) 00:16:50.200 fused_ordering(593) 00:16:50.200 fused_ordering(594) 00:16:50.200 fused_ordering(595) 00:16:50.200 fused_ordering(596) 00:16:50.200 fused_ordering(597) 00:16:50.200 fused_ordering(598) 00:16:50.200 fused_ordering(599) 00:16:50.200 fused_ordering(600) 00:16:50.200 fused_ordering(601) 00:16:50.200 fused_ordering(602) 00:16:50.200 fused_ordering(603) 00:16:50.200 fused_ordering(604) 00:16:50.200 fused_ordering(605) 00:16:50.200 fused_ordering(606) 00:16:50.200 fused_ordering(607) 00:16:50.200 fused_ordering(608) 00:16:50.200 fused_ordering(609) 00:16:50.200 fused_ordering(610) 00:16:50.200 fused_ordering(611) 00:16:50.200 fused_ordering(612) 00:16:50.200 fused_ordering(613) 00:16:50.200 fused_ordering(614) 00:16:50.200 fused_ordering(615) 00:16:50.460 fused_ordering(616) 00:16:50.460 fused_ordering(617) 00:16:50.460 fused_ordering(618) 00:16:50.460 fused_ordering(619) 00:16:50.460 fused_ordering(620) 00:16:50.460 fused_ordering(621) 00:16:50.460 fused_ordering(622) 00:16:50.460 fused_ordering(623) 00:16:50.460 fused_ordering(624) 00:16:50.460 fused_ordering(625) 00:16:50.460 fused_ordering(626) 00:16:50.460 fused_ordering(627) 00:16:50.460 fused_ordering(628) 00:16:50.460 fused_ordering(629) 00:16:50.460 fused_ordering(630) 00:16:50.460 fused_ordering(631) 00:16:50.460 fused_ordering(632) 00:16:50.460 fused_ordering(633) 00:16:50.460 fused_ordering(634) 
00:16:50.460 fused_ordering(635) 00:16:50.460 fused_ordering(636) 00:16:50.460 fused_ordering(637) 00:16:50.460 fused_ordering(638) 00:16:50.460 fused_ordering(639) 00:16:50.460 fused_ordering(640) 00:16:50.460 fused_ordering(641) 00:16:50.460 fused_ordering(642) 00:16:50.460 fused_ordering(643) 00:16:50.460 fused_ordering(644) 00:16:50.460 fused_ordering(645) 00:16:50.460 fused_ordering(646) 00:16:50.460 fused_ordering(647) 00:16:50.460 fused_ordering(648) 00:16:50.460 fused_ordering(649) 00:16:50.460 fused_ordering(650) 00:16:50.460 fused_ordering(651) 00:16:50.460 fused_ordering(652) 00:16:50.460 fused_ordering(653) 00:16:50.460 fused_ordering(654) 00:16:50.460 fused_ordering(655) 00:16:50.460 fused_ordering(656) 00:16:50.460 fused_ordering(657) 00:16:50.460 fused_ordering(658) 00:16:50.460 fused_ordering(659) 00:16:50.460 fused_ordering(660) 00:16:50.460 fused_ordering(661) 00:16:50.460 fused_ordering(662) 00:16:50.460 fused_ordering(663) 00:16:50.460 fused_ordering(664) 00:16:50.460 fused_ordering(665) 00:16:50.460 fused_ordering(666) 00:16:50.460 fused_ordering(667) 00:16:50.460 fused_ordering(668) 00:16:50.460 fused_ordering(669) 00:16:50.460 fused_ordering(670) 00:16:50.460 fused_ordering(671) 00:16:50.460 fused_ordering(672) 00:16:50.460 fused_ordering(673) 00:16:50.460 fused_ordering(674) 00:16:50.460 fused_ordering(675) 00:16:50.460 fused_ordering(676) 00:16:50.460 fused_ordering(677) 00:16:50.460 fused_ordering(678) 00:16:50.460 fused_ordering(679) 00:16:50.460 fused_ordering(680) 00:16:50.460 fused_ordering(681) 00:16:50.460 fused_ordering(682) 00:16:50.460 fused_ordering(683) 00:16:50.460 fused_ordering(684) 00:16:50.460 fused_ordering(685) 00:16:50.460 fused_ordering(686) 00:16:50.460 fused_ordering(687) 00:16:50.460 fused_ordering(688) 00:16:50.460 fused_ordering(689) 00:16:50.460 fused_ordering(690) 00:16:50.460 fused_ordering(691) 00:16:50.460 fused_ordering(692) 00:16:50.460 fused_ordering(693) 00:16:50.460 fused_ordering(694) 00:16:50.460 fused_ordering(695) 00:16:50.460 fused_ordering(696) 00:16:50.460 fused_ordering(697) 00:16:50.460 fused_ordering(698) 00:16:50.460 fused_ordering(699) 00:16:50.460 fused_ordering(700) 00:16:50.460 fused_ordering(701) 00:16:50.460 fused_ordering(702) 00:16:50.460 fused_ordering(703) 00:16:50.460 fused_ordering(704) 00:16:50.460 fused_ordering(705) 00:16:50.460 fused_ordering(706) 00:16:50.460 fused_ordering(707) 00:16:50.460 fused_ordering(708) 00:16:50.460 fused_ordering(709) 00:16:50.460 fused_ordering(710) 00:16:50.460 fused_ordering(711) 00:16:50.460 fused_ordering(712) 00:16:50.460 fused_ordering(713) 00:16:50.460 fused_ordering(714) 00:16:50.460 fused_ordering(715) 00:16:50.460 fused_ordering(716) 00:16:50.460 fused_ordering(717) 00:16:50.460 fused_ordering(718) 00:16:50.460 fused_ordering(719) 00:16:50.460 fused_ordering(720) 00:16:50.460 fused_ordering(721) 00:16:50.460 fused_ordering(722) 00:16:50.460 fused_ordering(723) 00:16:50.460 fused_ordering(724) 00:16:50.460 fused_ordering(725) 00:16:50.460 fused_ordering(726) 00:16:50.460 fused_ordering(727) 00:16:50.460 fused_ordering(728) 00:16:50.460 fused_ordering(729) 00:16:50.460 fused_ordering(730) 00:16:50.460 fused_ordering(731) 00:16:50.460 fused_ordering(732) 00:16:50.460 fused_ordering(733) 00:16:50.460 fused_ordering(734) 00:16:50.460 fused_ordering(735) 00:16:50.460 fused_ordering(736) 00:16:50.460 fused_ordering(737) 00:16:50.460 fused_ordering(738) 00:16:50.460 fused_ordering(739) 00:16:50.460 fused_ordering(740) 00:16:50.460 fused_ordering(741) 00:16:50.460 
fused_ordering(742) 00:16:50.460 fused_ordering(743) 00:16:50.460 fused_ordering(744) 00:16:50.460 fused_ordering(745) 00:16:50.460 fused_ordering(746) 00:16:50.460 fused_ordering(747) 00:16:50.460 fused_ordering(748) 00:16:50.460 fused_ordering(749) 00:16:50.460 fused_ordering(750) 00:16:50.460 fused_ordering(751) 00:16:50.460 fused_ordering(752) 00:16:50.460 fused_ordering(753) 00:16:50.460 fused_ordering(754) 00:16:50.460 fused_ordering(755) 00:16:50.460 fused_ordering(756) 00:16:50.460 fused_ordering(757) 00:16:50.460 fused_ordering(758) 00:16:50.460 fused_ordering(759) 00:16:50.460 fused_ordering(760) 00:16:50.460 fused_ordering(761) 00:16:50.460 fused_ordering(762) 00:16:50.460 fused_ordering(763) 00:16:50.460 fused_ordering(764) 00:16:50.460 fused_ordering(765) 00:16:50.460 fused_ordering(766) 00:16:50.460 fused_ordering(767) 00:16:50.460 fused_ordering(768) 00:16:50.460 fused_ordering(769) 00:16:50.460 fused_ordering(770) 00:16:50.460 fused_ordering(771) 00:16:50.460 fused_ordering(772) 00:16:50.460 fused_ordering(773) 00:16:50.460 fused_ordering(774) 00:16:50.460 fused_ordering(775) 00:16:50.460 fused_ordering(776) 00:16:50.460 fused_ordering(777) 00:16:50.460 fused_ordering(778) 00:16:50.460 fused_ordering(779) 00:16:50.460 fused_ordering(780) 00:16:50.460 fused_ordering(781) 00:16:50.460 fused_ordering(782) 00:16:50.460 fused_ordering(783) 00:16:50.460 fused_ordering(784) 00:16:50.460 fused_ordering(785) 00:16:50.460 fused_ordering(786) 00:16:50.460 fused_ordering(787) 00:16:50.460 fused_ordering(788) 00:16:50.460 fused_ordering(789) 00:16:50.461 fused_ordering(790) 00:16:50.461 fused_ordering(791) 00:16:50.461 fused_ordering(792) 00:16:50.461 fused_ordering(793) 00:16:50.461 fused_ordering(794) 00:16:50.461 fused_ordering(795) 00:16:50.461 fused_ordering(796) 00:16:50.461 fused_ordering(797) 00:16:50.461 fused_ordering(798) 00:16:50.461 fused_ordering(799) 00:16:50.461 fused_ordering(800) 00:16:50.461 fused_ordering(801) 00:16:50.461 fused_ordering(802) 00:16:50.461 fused_ordering(803) 00:16:50.461 fused_ordering(804) 00:16:50.461 fused_ordering(805) 00:16:50.461 fused_ordering(806) 00:16:50.461 fused_ordering(807) 00:16:50.461 fused_ordering(808) 00:16:50.461 fused_ordering(809) 00:16:50.461 fused_ordering(810) 00:16:50.461 fused_ordering(811) 00:16:50.461 fused_ordering(812) 00:16:50.461 fused_ordering(813) 00:16:50.461 fused_ordering(814) 00:16:50.461 fused_ordering(815) 00:16:50.461 fused_ordering(816) 00:16:50.461 fused_ordering(817) 00:16:50.461 fused_ordering(818) 00:16:50.461 fused_ordering(819) 00:16:50.461 fused_ordering(820) 00:16:50.461 fused_ordering(821) 00:16:50.461 fused_ordering(822) 00:16:50.461 fused_ordering(823) 00:16:50.461 fused_ordering(824) 00:16:50.461 fused_ordering(825) 00:16:50.461 fused_ordering(826) 00:16:50.461 fused_ordering(827) 00:16:50.461 fused_ordering(828) 00:16:50.461 fused_ordering(829) 00:16:50.461 fused_ordering(830) 00:16:50.461 fused_ordering(831) 00:16:50.461 fused_ordering(832) 00:16:50.461 fused_ordering(833) 00:16:50.461 fused_ordering(834) 00:16:50.461 fused_ordering(835) 00:16:50.461 fused_ordering(836) 00:16:50.461 fused_ordering(837) 00:16:50.461 fused_ordering(838) 00:16:50.461 fused_ordering(839) 00:16:50.461 fused_ordering(840) 00:16:50.461 fused_ordering(841) 00:16:50.461 fused_ordering(842) 00:16:50.461 fused_ordering(843) 00:16:50.461 fused_ordering(844) 00:16:50.461 fused_ordering(845) 00:16:50.461 fused_ordering(846) 00:16:50.461 fused_ordering(847) 00:16:50.461 fused_ordering(848) 00:16:50.461 fused_ordering(849) 
00:16:50.461 fused_ordering(850) 00:16:50.461 fused_ordering(851) 00:16:50.461 fused_ordering(852) 00:16:50.461 fused_ordering(853) 00:16:50.461 fused_ordering(854) 00:16:50.461 fused_ordering(855) 00:16:50.461 fused_ordering(856) 00:16:50.461 fused_ordering(857) 00:16:50.461 fused_ordering(858) 00:16:50.461 fused_ordering(859) 00:16:50.461 fused_ordering(860) 00:16:50.461 fused_ordering(861) 00:16:50.461 fused_ordering(862) 00:16:50.461 fused_ordering(863) 00:16:50.461 fused_ordering(864) 00:16:50.461 fused_ordering(865) 00:16:50.461 fused_ordering(866) 00:16:50.461 fused_ordering(867) 00:16:50.461 fused_ordering(868) 00:16:50.461 fused_ordering(869) 00:16:50.461 fused_ordering(870) 00:16:50.461 fused_ordering(871) 00:16:50.461 fused_ordering(872) 00:16:50.461 fused_ordering(873) 00:16:50.461 fused_ordering(874) 00:16:50.461 fused_ordering(875) 00:16:50.461 fused_ordering(876) 00:16:50.461 fused_ordering(877) 00:16:50.461 fused_ordering(878) 00:16:50.461 fused_ordering(879) 00:16:50.461 fused_ordering(880) 00:16:50.461 fused_ordering(881) 00:16:50.461 fused_ordering(882) 00:16:50.461 fused_ordering(883) 00:16:50.461 fused_ordering(884) 00:16:50.461 fused_ordering(885) 00:16:50.461 fused_ordering(886) 00:16:50.461 fused_ordering(887) 00:16:50.461 fused_ordering(888) 00:16:50.461 fused_ordering(889) 00:16:50.461 fused_ordering(890) 00:16:50.461 fused_ordering(891) 00:16:50.461 fused_ordering(892) 00:16:50.461 fused_ordering(893) 00:16:50.461 fused_ordering(894) 00:16:50.461 fused_ordering(895) 00:16:50.461 fused_ordering(896) 00:16:50.461 fused_ordering(897) 00:16:50.461 fused_ordering(898) 00:16:50.461 fused_ordering(899) 00:16:50.461 fused_ordering(900) 00:16:50.461 fused_ordering(901) 00:16:50.461 fused_ordering(902) 00:16:50.461 fused_ordering(903) 00:16:50.461 fused_ordering(904) 00:16:50.461 fused_ordering(905) 00:16:50.461 fused_ordering(906) 00:16:50.461 fused_ordering(907) 00:16:50.461 fused_ordering(908) 00:16:50.461 fused_ordering(909) 00:16:50.461 fused_ordering(910) 00:16:50.461 fused_ordering(911) 00:16:50.461 fused_ordering(912) 00:16:50.461 fused_ordering(913) 00:16:50.461 fused_ordering(914) 00:16:50.461 fused_ordering(915) 00:16:50.461 fused_ordering(916) 00:16:50.461 fused_ordering(917) 00:16:50.461 fused_ordering(918) 00:16:50.461 fused_ordering(919) 00:16:50.461 fused_ordering(920) 00:16:50.461 fused_ordering(921) 00:16:50.461 fused_ordering(922) 00:16:50.461 fused_ordering(923) 00:16:50.461 fused_ordering(924) 00:16:50.461 fused_ordering(925) 00:16:50.461 fused_ordering(926) 00:16:50.461 fused_ordering(927) 00:16:50.461 fused_ordering(928) 00:16:50.461 fused_ordering(929) 00:16:50.461 fused_ordering(930) 00:16:50.461 fused_ordering(931) 00:16:50.461 fused_ordering(932) 00:16:50.461 fused_ordering(933) 00:16:50.461 fused_ordering(934) 00:16:50.461 fused_ordering(935) 00:16:50.461 fused_ordering(936) 00:16:50.461 fused_ordering(937) 00:16:50.461 fused_ordering(938) 00:16:50.461 fused_ordering(939) 00:16:50.461 fused_ordering(940) 00:16:50.461 fused_ordering(941) 00:16:50.461 fused_ordering(942) 00:16:50.461 fused_ordering(943) 00:16:50.461 fused_ordering(944) 00:16:50.461 fused_ordering(945) 00:16:50.461 fused_ordering(946) 00:16:50.461 fused_ordering(947) 00:16:50.461 fused_ordering(948) 00:16:50.461 fused_ordering(949) 00:16:50.461 fused_ordering(950) 00:16:50.461 fused_ordering(951) 00:16:50.461 fused_ordering(952) 00:16:50.461 fused_ordering(953) 00:16:50.461 fused_ordering(954) 00:16:50.461 fused_ordering(955) 00:16:50.461 fused_ordering(956) 00:16:50.461 
fused_ordering(957) 00:16:50.461 fused_ordering(958) 00:16:50.461 fused_ordering(959) 00:16:50.461 fused_ordering(960) 00:16:50.461 fused_ordering(961) 00:16:50.461 fused_ordering(962) 00:16:50.461 fused_ordering(963) 00:16:50.461 fused_ordering(964) 00:16:50.461 fused_ordering(965) 00:16:50.461 fused_ordering(966) 00:16:50.461 fused_ordering(967) 00:16:50.461 fused_ordering(968) 00:16:50.461 fused_ordering(969) 00:16:50.461 fused_ordering(970) 00:16:50.461 fused_ordering(971) 00:16:50.461 fused_ordering(972) 00:16:50.461 fused_ordering(973) 00:16:50.461 fused_ordering(974) 00:16:50.461 fused_ordering(975) 00:16:50.461 fused_ordering(976) 00:16:50.461 fused_ordering(977) 00:16:50.461 fused_ordering(978) 00:16:50.461 fused_ordering(979) 00:16:50.461 fused_ordering(980) 00:16:50.461 fused_ordering(981) 00:16:50.461 fused_ordering(982) 00:16:50.461 fused_ordering(983) 00:16:50.461 fused_ordering(984) 00:16:50.461 fused_ordering(985) 00:16:50.461 fused_ordering(986) 00:16:50.461 fused_ordering(987) 00:16:50.461 fused_ordering(988) 00:16:50.461 fused_ordering(989) 00:16:50.461 fused_ordering(990) 00:16:50.461 fused_ordering(991) 00:16:50.461 fused_ordering(992) 00:16:50.461 fused_ordering(993) 00:16:50.461 fused_ordering(994) 00:16:50.461 fused_ordering(995) 00:16:50.461 fused_ordering(996) 00:16:50.461 fused_ordering(997) 00:16:50.461 fused_ordering(998) 00:16:50.461 fused_ordering(999) 00:16:50.461 fused_ordering(1000) 00:16:50.461 fused_ordering(1001) 00:16:50.461 fused_ordering(1002) 00:16:50.461 fused_ordering(1003) 00:16:50.461 fused_ordering(1004) 00:16:50.461 fused_ordering(1005) 00:16:50.461 fused_ordering(1006) 00:16:50.461 fused_ordering(1007) 00:16:50.461 fused_ordering(1008) 00:16:50.461 fused_ordering(1009) 00:16:50.461 fused_ordering(1010) 00:16:50.461 fused_ordering(1011) 00:16:50.461 fused_ordering(1012) 00:16:50.461 fused_ordering(1013) 00:16:50.461 fused_ordering(1014) 00:16:50.461 fused_ordering(1015) 00:16:50.461 fused_ordering(1016) 00:16:50.461 fused_ordering(1017) 00:16:50.461 fused_ordering(1018) 00:16:50.461 fused_ordering(1019) 00:16:50.461 fused_ordering(1020) 00:16:50.461 fused_ordering(1021) 00:16:50.461 fused_ordering(1022) 00:16:50.461 fused_ordering(1023) 00:16:50.461 22:03:01 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:16:50.461 22:03:01 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:16:50.461 22:03:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:50.461 22:03:01 -- nvmf/common.sh@116 -- # sync 00:16:50.461 22:03:01 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:50.461 22:03:01 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:50.461 22:03:01 -- nvmf/common.sh@119 -- # set +e 00:16:50.461 22:03:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:50.461 22:03:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:50.461 rmmod nvme_rdma 00:16:50.461 rmmod nvme_fabrics 00:16:50.461 22:03:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:50.721 22:03:01 -- nvmf/common.sh@123 -- # set -e 00:16:50.721 22:03:01 -- nvmf/common.sh@124 -- # return 0 00:16:50.721 22:03:01 -- nvmf/common.sh@477 -- # '[' -n 2155608 ']' 00:16:50.721 22:03:01 -- nvmf/common.sh@478 -- # killprocess 2155608 00:16:50.721 22:03:01 -- common/autotest_common.sh@926 -- # '[' -z 2155608 ']' 00:16:50.721 22:03:01 -- common/autotest_common.sh@930 -- # kill -0 2155608 00:16:50.721 22:03:01 -- common/autotest_common.sh@931 -- # uname 00:16:50.721 22:03:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:50.721 22:03:01 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2155608 00:16:50.721 22:03:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:50.721 22:03:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:50.721 22:03:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2155608' 00:16:50.721 killing process with pid 2155608 00:16:50.721 22:03:01 -- common/autotest_common.sh@945 -- # kill 2155608 00:16:50.721 22:03:01 -- common/autotest_common.sh@950 -- # wait 2155608 00:16:50.980 22:03:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:50.980 22:03:01 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:50.980 00:16:50.980 real 0m10.828s 00:16:50.980 user 0m5.105s 00:16:50.980 sys 0m7.107s 00:16:50.980 22:03:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.980 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:16:50.980 ************************************ 00:16:50.980 END TEST nvmf_fused_ordering 00:16:50.980 ************************************ 00:16:50.980 22:03:01 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:50.980 22:03:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:50.980 22:03:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:50.980 22:03:01 -- common/autotest_common.sh@10 -- # set +x 00:16:50.980 ************************************ 00:16:50.980 START TEST nvmf_delete_subsystem 00:16:50.980 ************************************ 00:16:50.980 22:03:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:16:50.980 * Looking for test storage... 
00:16:50.980 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:50.980 22:03:02 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.980 22:03:02 -- nvmf/common.sh@7 -- # uname -s 00:16:50.980 22:03:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.980 22:03:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.980 22:03:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.980 22:03:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.980 22:03:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.980 22:03:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.980 22:03:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.980 22:03:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.980 22:03:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.980 22:03:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.980 22:03:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:50.980 22:03:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:50.980 22:03:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.980 22:03:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.980 22:03:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.981 22:03:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:50.981 22:03:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.981 22:03:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.981 22:03:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.981 22:03:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.981 22:03:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.981 22:03:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.981 22:03:02 -- paths/export.sh@5 -- # export PATH 00:16:50.981 22:03:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.981 22:03:02 -- nvmf/common.sh@46 -- # : 0 00:16:50.981 22:03:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:50.981 22:03:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:50.981 22:03:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:50.981 22:03:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.981 22:03:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.981 22:03:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:50.981 22:03:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:50.981 22:03:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:50.981 22:03:02 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:16:50.981 22:03:02 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:50.981 22:03:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.981 22:03:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:50.981 22:03:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:50.981 22:03:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:50.981 22:03:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.981 22:03:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.981 22:03:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.981 22:03:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:50.981 22:03:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:50.981 22:03:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:50.981 22:03:02 -- common/autotest_common.sh@10 -- # set +x 00:16:59.103 22:03:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:59.103 22:03:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:59.103 22:03:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:59.103 22:03:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:59.103 22:03:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:59.103 22:03:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:59.103 22:03:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:59.103 22:03:09 -- nvmf/common.sh@294 -- # net_devs=() 00:16:59.103 22:03:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:59.103 22:03:09 -- nvmf/common.sh@295 -- # e810=() 00:16:59.103 22:03:09 -- nvmf/common.sh@295 -- # local -ga e810 00:16:59.103 22:03:09 -- nvmf/common.sh@296 -- # 
x722=() 00:16:59.103 22:03:09 -- nvmf/common.sh@296 -- # local -ga x722 00:16:59.103 22:03:09 -- nvmf/common.sh@297 -- # mlx=() 00:16:59.103 22:03:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:59.103 22:03:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.103 22:03:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.103 22:03:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.103 22:03:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.103 22:03:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.104 22:03:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.104 22:03:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.104 22:03:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.104 22:03:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.104 22:03:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.104 22:03:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.104 22:03:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:59.104 22:03:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:59.104 22:03:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:59.104 22:03:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:59.104 22:03:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:59.104 22:03:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:59.104 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:59.104 22:03:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:59.104 22:03:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:59.104 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:59.104 22:03:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:59.104 22:03:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:59.104 22:03:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.104 22:03:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:59.104 22:03:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.104 22:03:09 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:59.104 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:59.104 22:03:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.104 22:03:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.104 22:03:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:59.104 22:03:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.104 22:03:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:59.104 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:59.104 22:03:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.104 22:03:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:59.104 22:03:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:59.104 22:03:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:59.104 22:03:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:59.104 22:03:09 -- nvmf/common.sh@57 -- # uname 00:16:59.104 22:03:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:59.104 22:03:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:59.104 22:03:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:59.104 22:03:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:59.104 22:03:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:59.104 22:03:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:59.104 22:03:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:59.104 22:03:09 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:59.104 22:03:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:59.104 22:03:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:59.104 22:03:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:59.104 22:03:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:59.104 22:03:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:59.104 22:03:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:59.104 22:03:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:59.104 22:03:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:59.104 22:03:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:59.104 22:03:09 -- nvmf/common.sh@104 -- # continue 2 00:16:59.104 22:03:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.104 22:03:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:59.104 22:03:09 -- nvmf/common.sh@104 -- # continue 2 00:16:59.104 22:03:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:59.104 22:03:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:59.104 22:03:09 -- 
nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:59.104 22:03:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:59.104 22:03:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:59.104 22:03:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:59.104 22:03:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:59.104 22:03:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:59.104 22:03:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:59.104 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:59.104 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:59.104 altname enp217s0f0np0 00:16:59.104 altname ens818f0np0 00:16:59.104 inet 192.168.100.8/24 scope global mlx_0_0 00:16:59.104 valid_lft forever preferred_lft forever 00:16:59.104 22:03:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:59.104 22:03:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:59.104 22:03:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:59.104 22:03:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:59.104 22:03:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:59.104 22:03:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:59.104 22:03:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:59.104 22:03:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:59.104 22:03:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:59.104 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:59.104 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:59.104 altname enp217s0f1np1 00:16:59.104 altname ens818f1np1 00:16:59.104 inet 192.168.100.9/24 scope global mlx_0_1 00:16:59.104 valid_lft forever preferred_lft forever 00:16:59.104 22:03:10 -- nvmf/common.sh@410 -- # return 0 00:16:59.104 22:03:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:59.104 22:03:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:59.104 22:03:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:59.104 22:03:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:59.104 22:03:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:59.104 22:03:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:59.104 22:03:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:59.104 22:03:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:59.104 22:03:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:59.104 22:03:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:59.104 22:03:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:59.104 22:03:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.104 22:03:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:59.104 22:03:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:59.104 22:03:10 -- nvmf/common.sh@104 -- # continue 2 00:16:59.104 22:03:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:59.104 22:03:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.104 22:03:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:59.104 22:03:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:59.104 22:03:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:59.104 22:03:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:59.104 22:03:10 -- nvmf/common.sh@104 -- # continue 2 00:16:59.104 22:03:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:59.104 
22:03:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:59.104 22:03:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:59.104 22:03:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:59.105 22:03:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:59.105 22:03:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:59.105 22:03:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:59.105 22:03:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:59.105 22:03:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:59.105 22:03:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:59.105 22:03:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:59.105 22:03:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:59.105 22:03:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:59.105 192.168.100.9' 00:16:59.105 22:03:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:59.105 192.168.100.9' 00:16:59.105 22:03:10 -- nvmf/common.sh@445 -- # head -n 1 00:16:59.105 22:03:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:59.105 22:03:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:59.105 192.168.100.9' 00:16:59.105 22:03:10 -- nvmf/common.sh@446 -- # tail -n +2 00:16:59.105 22:03:10 -- nvmf/common.sh@446 -- # head -n 1 00:16:59.105 22:03:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:59.105 22:03:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:59.105 22:03:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:59.105 22:03:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:59.105 22:03:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:59.105 22:03:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:59.105 22:03:10 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:59.105 22:03:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:59.105 22:03:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:59.105 22:03:10 -- common/autotest_common.sh@10 -- # set +x 00:16:59.105 22:03:10 -- nvmf/common.sh@469 -- # nvmfpid=2160429 00:16:59.105 22:03:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:59.105 22:03:10 -- nvmf/common.sh@470 -- # waitforlisten 2160429 00:16:59.105 22:03:10 -- common/autotest_common.sh@819 -- # '[' -z 2160429 ']' 00:16:59.105 22:03:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.105 22:03:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:59.105 22:03:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.105 22:03:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:59.105 22:03:10 -- common/autotest_common.sh@10 -- # set +x 00:16:59.105 [2024-07-26 22:03:10.190389] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:59.105 [2024-07-26 22:03:10.190441] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.105 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.105 [2024-07-26 22:03:10.276491] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:59.105 [2024-07-26 22:03:10.312939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:59.105 [2024-07-26 22:03:10.313056] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.105 [2024-07-26 22:03:10.313066] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.105 [2024-07-26 22:03:10.313074] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.105 [2024-07-26 22:03:10.313122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.105 [2024-07-26 22:03:10.313125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.043 22:03:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.043 22:03:10 -- common/autotest_common.sh@852 -- # return 0 00:17:00.044 22:03:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:00.044 22:03:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:00.044 22:03:10 -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 22:03:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:00.044 22:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.044 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 [2024-07-26 22:03:11.040962] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2260e80/0x2265370) succeed. 00:17:00.044 [2024-07-26 22:03:11.049878] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2262380/0x22a6a00) succeed. 
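The trace up to this point is the stock SPDK target bring-up these tests share: launch nvmf_tgt on a small core mask, wait for its RPC socket, then create the RDMA transport, at which point the two mlx5 IB devices are registered. A minimal manual reproduction, assuming a built SPDK tree and using scripts/rpc.py directly in place of the harness's rpc_cmd wrapper, would look roughly like this:

    # Start the NVMe-oF target on cores 0-1 (mask 0x3) and wait for its RPC socket.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    # Create the RDMA transport with the same options the test passes.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The socket poll here stands in for the waitforlisten helper seen in the trace; the real helper also bounds its retries (max_retries=100 above).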
00:17:00.044 22:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:00.044 22:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.044 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 22:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:00.044 22:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.044 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 [2024-07-26 22:03:11.134092] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:00.044 22:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:00.044 22:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.044 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 NULL1 00:17:00.044 22:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:00.044 22:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.044 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 Delay0 00:17:00.044 22:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:00.044 22:03:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.044 22:03:11 -- common/autotest_common.sh@10 -- # set +x 00:17:00.044 22:03:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@28 -- # perf_pid=2160654 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:00.044 22:03:11 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:00.044 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.044 [2024-07-26 22:03:11.240961] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
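What delete_subsystem.sh has built here is deliberately slow: the namespace behind cnode1 is a null bdev wrapped in a delay bdev that adds roughly one second of latency to every read and write, so spdk_nvme_perf always has a full queue of I/O in flight when the subsystem is later deleted. A rough equivalent of the RPC sequence shown in the trace, again assuming scripts/rpc.py in place of rpc_cmd, is:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MiB backing size, 512-byte blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # read/write latencies in microseconds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Initiator side: keep 128 commands outstanding against the slow namespace.
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &

Deleting the subsystem while those queues are still full is what produces the completion errors that follow.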
00:17:01.950 22:03:13 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.950 22:03:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.950 22:03:13 -- common/autotest_common.sh@10 -- # set +x 00:17:03.329 NVMe io qpair process completion error 00:17:03.329 NVMe io qpair process completion error 00:17:03.329 NVMe io qpair process completion error 00:17:03.329 NVMe io qpair process completion error 00:17:03.329 NVMe io qpair process completion error 00:17:03.329 NVMe io qpair process completion error 00:17:03.329 NVMe io qpair process completion error 00:17:03.329 22:03:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:03.329 22:03:14 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:03.329 22:03:14 -- target/delete_subsystem.sh@35 -- # kill -0 2160654 00:17:03.329 22:03:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:03.899 22:03:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:03.899 22:03:14 -- target/delete_subsystem.sh@35 -- # kill -0 2160654 00:17:03.899 22:03:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 
starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Write completed with error (sct=0, sc=8) 00:17:04.161 starting I/O failed: -6 00:17:04.161 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with 
error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 
00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 starting I/O failed: -6 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Write completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.162 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed 
with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 
Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Write completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 Read completed with error (sct=0, sc=8) 00:17:04.163 22:03:15 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:04.163 22:03:15 -- target/delete_subsystem.sh@35 -- # kill -0 2160654 00:17:04.163 22:03:15 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:04.163 [2024-07-26 22:03:15.338567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:04.163 [2024-07-26 22:03:15.338608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:04.163 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:04.163 Initializing NVMe Controllers 00:17:04.163 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:04.163 Controller IO queue size 128, less than required. 00:17:04.163 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:04.163 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:04.163 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:04.163 Initialization complete. Launching workers. 
00:17:04.163 ======================================================== 00:17:04.163 Latency(us) 00:17:04.163 Device Information : IOPS MiB/s Average min max 00:17:04.163 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.34 0.04 1595795.35 1000074.73 2983275.12 00:17:04.163 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.34 0.04 1597144.66 1000432.83 2984240.62 00:17:04.163 ======================================================== 00:17:04.163 Total : 160.69 0.08 1596470.01 1000074.73 2984240.62 00:17:04.163 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@35 -- # kill -0 2160654 00:17:04.732 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2160654) - No such process 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@45 -- # NOT wait 2160654 00:17:04.732 22:03:15 -- common/autotest_common.sh@640 -- # local es=0 00:17:04.732 22:03:15 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 2160654 00:17:04.732 22:03:15 -- common/autotest_common.sh@628 -- # local arg=wait 00:17:04.732 22:03:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:04.732 22:03:15 -- common/autotest_common.sh@632 -- # type -t wait 00:17:04.732 22:03:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:04.732 22:03:15 -- common/autotest_common.sh@643 -- # wait 2160654 00:17:04.732 22:03:15 -- common/autotest_common.sh@643 -- # es=1 00:17:04.732 22:03:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:04.732 22:03:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:04.732 22:03:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:04.732 22:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.732 22:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:04.732 22:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:04.732 22:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.732 22:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:04.732 [2024-07-26 22:03:15.858270] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:04.732 22:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:04.732 22:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:04.732 22:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:04.732 22:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@54 -- # perf_pid=2161471 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:04.732 22:03:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:04.732 
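The kill -0 / sleep 0.5 lines that repeat from here on are the script's bounded wait for the second perf run: it polls the perf PID every half second instead of calling wait, and gives up after a fixed number of polls. Reconstructed from the trace (the action taken when the retry budget is exhausted is not visible here), the pattern is roughly:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        # Stop polling once roughly 10 s have passed (20 polls x 0.5 s), as at line 60 of the script.
        (( delay++ > 20 )) && break
        sleep 0.5
    done

For the first run above, the loop exited early because the subsystem deletion made perf terminate with errors, which is why the subsequent kill reports 'No such process'.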
EAL: No free 2048 kB hugepages reported on node 1 00:17:04.732 [2024-07-26 22:03:15.944325] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:05.300 22:03:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:05.300 22:03:16 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:05.300 22:03:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:05.868 22:03:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:05.868 22:03:16 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:05.868 22:03:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:06.436 22:03:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:06.436 22:03:17 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:06.436 22:03:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:06.695 22:03:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:06.695 22:03:17 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:06.695 22:03:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:07.263 22:03:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:07.263 22:03:18 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:07.263 22:03:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:07.831 22:03:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:07.831 22:03:18 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:07.831 22:03:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:08.398 22:03:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:08.398 22:03:19 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:08.399 22:03:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:08.966 22:03:19 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:08.966 22:03:19 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:08.966 22:03:19 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:09.225 22:03:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:09.225 22:03:20 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:09.225 22:03:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:09.793 22:03:20 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:09.793 22:03:20 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:09.793 22:03:20 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:10.360 22:03:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:10.360 22:03:21 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:10.360 22:03:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:10.926 22:03:21 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:10.926 22:03:21 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:10.926 22:03:21 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:11.495 22:03:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:11.495 22:03:22 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:11.495 22:03:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:11.754 22:03:22 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:11.754 22:03:22 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:11.754 22:03:22 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:12.013 Initializing NVMe 
Controllers 00:17:12.013 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:12.013 Controller IO queue size 128, less than required. 00:17:12.013 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:12.013 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:12.013 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:12.013 Initialization complete. Launching workers. 00:17:12.013 ======================================================== 00:17:12.013 Latency(us) 00:17:12.013 Device Information : IOPS MiB/s Average min max 00:17:12.013 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001555.90 1000057.59 1003966.39 00:17:12.013 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002661.34 1000087.49 1006294.94 00:17:12.013 ======================================================== 00:17:12.013 Total : 256.00 0.12 1002108.62 1000057.59 1006294.94 00:17:12.013 00:17:12.271 22:03:23 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:12.271 22:03:23 -- target/delete_subsystem.sh@57 -- # kill -0 2161471 00:17:12.271 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2161471) - No such process 00:17:12.271 22:03:23 -- target/delete_subsystem.sh@67 -- # wait 2161471 00:17:12.271 22:03:23 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:12.272 22:03:23 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:12.272 22:03:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:12.272 22:03:23 -- nvmf/common.sh@116 -- # sync 00:17:12.272 22:03:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:12.272 22:03:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:12.272 22:03:23 -- nvmf/common.sh@119 -- # set +e 00:17:12.272 22:03:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:12.272 22:03:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:12.272 rmmod nvme_rdma 00:17:12.272 rmmod nvme_fabrics 00:17:12.530 22:03:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:12.530 22:03:23 -- nvmf/common.sh@123 -- # set -e 00:17:12.530 22:03:23 -- nvmf/common.sh@124 -- # return 0 00:17:12.530 22:03:23 -- nvmf/common.sh@477 -- # '[' -n 2160429 ']' 00:17:12.530 22:03:23 -- nvmf/common.sh@478 -- # killprocess 2160429 00:17:12.530 22:03:23 -- common/autotest_common.sh@926 -- # '[' -z 2160429 ']' 00:17:12.530 22:03:23 -- common/autotest_common.sh@930 -- # kill -0 2160429 00:17:12.530 22:03:23 -- common/autotest_common.sh@931 -- # uname 00:17:12.530 22:03:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.530 22:03:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2160429 00:17:12.530 22:03:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:12.530 22:03:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:12.530 22:03:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2160429' 00:17:12.530 killing process with pid 2160429 00:17:12.530 22:03:23 -- common/autotest_common.sh@945 -- # kill 2160429 00:17:12.530 22:03:23 -- common/autotest_common.sh@950 -- # wait 2160429 00:17:12.788 22:03:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:12.788 22:03:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:12.788 00:17:12.788 real 0m21.786s 
00:17:12.788 user 0m50.219s 00:17:12.788 sys 0m7.517s 00:17:12.788 22:03:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:12.788 22:03:23 -- common/autotest_common.sh@10 -- # set +x 00:17:12.788 ************************************ 00:17:12.788 END TEST nvmf_delete_subsystem 00:17:12.788 ************************************ 00:17:12.788 22:03:23 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:12.788 22:03:23 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:12.788 22:03:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:12.788 22:03:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:12.788 22:03:23 -- common/autotest_common.sh@10 -- # set +x 00:17:12.788 ************************************ 00:17:12.788 START TEST nvmf_nvme_cli 00:17:12.788 ************************************ 00:17:12.788 22:03:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:12.788 * Looking for test storage... 00:17:12.788 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:12.788 22:03:23 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.788 22:03:23 -- nvmf/common.sh@7 -- # uname -s 00:17:12.788 22:03:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.788 22:03:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.788 22:03:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.788 22:03:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.788 22:03:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.788 22:03:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.788 22:03:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.788 22:03:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.788 22:03:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.788 22:03:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.788 22:03:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:12.788 22:03:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:12.788 22:03:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.788 22:03:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.788 22:03:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:12.788 22:03:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:12.788 22:03:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.788 22:03:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.788 22:03:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.788 22:03:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.788 22:03:23 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.788 22:03:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.788 22:03:23 -- paths/export.sh@5 -- # export PATH 00:17:12.788 22:03:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.788 22:03:23 -- nvmf/common.sh@46 -- # : 0 00:17:12.788 22:03:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:12.789 22:03:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:12.789 22:03:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:12.789 22:03:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.789 22:03:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.789 22:03:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:12.789 22:03:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:12.789 22:03:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:12.789 22:03:23 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:12.789 22:03:23 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:12.789 22:03:23 -- target/nvme_cli.sh@14 -- # devs=() 00:17:12.789 22:03:23 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:12.789 22:03:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:12.789 22:03:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.789 22:03:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:12.789 22:03:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:12.789 22:03:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:12.789 22:03:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.789 22:03:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:12.789 22:03:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.789 22:03:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:12.789 22:03:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:12.789 22:03:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:12.789 22:03:23 -- common/autotest_common.sh@10 -- # set +x 
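The block that follows is nvmftestinit rediscovering the same two mlx5 ports (0000:d9:00.0 and 0000:d9:00.1) for the nvme_cli test and re-assigning 192.168.100.8/9 to them. The per-interface address lookup that keeps appearing in the trace (interface=..., ip -o -4 addr show, awk, cut) is in effect a small helper along these lines (a sketch of what the traced get_ip_address does, not a copy of it):

    # First IPv4 address on an interface, with the /prefix stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0    # prints 192.168.100.8 in this run
    get_ip_address mlx_0_1    # prints 192.168.100.9 in this run

get_available_rdma_ips then collects these per-port addresses into RDMA_IP_LIST, from which the first and second target IPs are taken with head and tail, as seen below.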
00:17:20.911 22:03:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:20.911 22:03:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:20.911 22:03:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:20.911 22:03:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:20.911 22:03:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:20.911 22:03:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:20.911 22:03:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:20.911 22:03:31 -- nvmf/common.sh@294 -- # net_devs=() 00:17:20.911 22:03:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:20.911 22:03:31 -- nvmf/common.sh@295 -- # e810=() 00:17:20.911 22:03:31 -- nvmf/common.sh@295 -- # local -ga e810 00:17:20.911 22:03:31 -- nvmf/common.sh@296 -- # x722=() 00:17:20.911 22:03:31 -- nvmf/common.sh@296 -- # local -ga x722 00:17:20.911 22:03:31 -- nvmf/common.sh@297 -- # mlx=() 00:17:20.911 22:03:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:20.911 22:03:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.911 22:03:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:20.911 22:03:31 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:20.911 22:03:31 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:20.911 22:03:31 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:20.911 22:03:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:20.911 22:03:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:20.911 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:20.911 22:03:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:20.911 22:03:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:20.911 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:20.911 22:03:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:20.911 
22:03:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:20.911 22:03:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:20.911 22:03:31 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.911 22:03:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.911 22:03:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.911 22:03:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:20.911 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:20.911 22:03:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.911 22:03:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.911 22:03:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.911 22:03:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.911 22:03:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:20.911 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:20.911 22:03:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.911 22:03:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:20.911 22:03:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:20.911 22:03:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:20.911 22:03:31 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:20.911 22:03:31 -- nvmf/common.sh@57 -- # uname 00:17:20.911 22:03:31 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:20.911 22:03:31 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:20.911 22:03:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:20.911 22:03:31 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:20.911 22:03:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:20.911 22:03:31 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:20.911 22:03:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:20.911 22:03:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:20.911 22:03:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:20.911 22:03:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:20.911 22:03:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:20.911 22:03:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:20.911 22:03:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:20.911 22:03:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:20.911 22:03:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:20.911 22:03:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:20.911 22:03:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:20.911 
22:03:31 -- nvmf/common.sh@104 -- # continue 2 00:17:20.911 22:03:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.911 22:03:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:20.911 22:03:31 -- nvmf/common.sh@104 -- # continue 2 00:17:20.911 22:03:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:20.911 22:03:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:20.911 22:03:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:20.911 22:03:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:20.911 22:03:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:20.911 22:03:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:20.911 22:03:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:20.911 22:03:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:20.911 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:20.911 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:20.911 altname enp217s0f0np0 00:17:20.911 altname ens818f0np0 00:17:20.911 inet 192.168.100.8/24 scope global mlx_0_0 00:17:20.911 valid_lft forever preferred_lft forever 00:17:20.911 22:03:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:20.911 22:03:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:20.911 22:03:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:20.911 22:03:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:20.911 22:03:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:20.911 22:03:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:20.911 22:03:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:20.911 22:03:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:20.911 22:03:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:20.911 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:20.911 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:20.911 altname enp217s0f1np1 00:17:20.912 altname ens818f1np1 00:17:20.912 inet 192.168.100.9/24 scope global mlx_0_1 00:17:20.912 valid_lft forever preferred_lft forever 00:17:20.912 22:03:31 -- nvmf/common.sh@410 -- # return 0 00:17:20.912 22:03:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:20.912 22:03:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:20.912 22:03:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:20.912 22:03:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:20.912 22:03:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:20.912 22:03:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:20.912 22:03:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:20.912 22:03:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:20.912 22:03:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:20.912 22:03:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:20.912 22:03:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:20.912 22:03:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.912 22:03:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:17:20.912 22:03:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:20.912 22:03:31 -- nvmf/common.sh@104 -- # continue 2 00:17:20.912 22:03:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:20.912 22:03:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.912 22:03:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:20.912 22:03:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:20.912 22:03:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:20.912 22:03:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:20.912 22:03:31 -- nvmf/common.sh@104 -- # continue 2 00:17:20.912 22:03:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:20.912 22:03:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:20.912 22:03:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:20.912 22:03:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:20.912 22:03:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:20.912 22:03:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:20.912 22:03:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:20.912 22:03:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:20.912 22:03:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:20.912 22:03:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:20.912 22:03:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:20.912 22:03:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:20.912 22:03:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:20.912 192.168.100.9' 00:17:20.912 22:03:31 -- nvmf/common.sh@445 -- # head -n 1 00:17:20.912 22:03:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:20.912 192.168.100.9' 00:17:20.912 22:03:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:20.912 22:03:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:20.912 192.168.100.9' 00:17:20.912 22:03:31 -- nvmf/common.sh@446 -- # tail -n +2 00:17:20.912 22:03:31 -- nvmf/common.sh@446 -- # head -n 1 00:17:20.912 22:03:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:20.912 22:03:31 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:20.912 22:03:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:20.912 22:03:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:20.912 22:03:31 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:20.912 22:03:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:20.912 22:03:31 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:20.912 22:03:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.912 22:03:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:20.912 22:03:31 -- common/autotest_common.sh@10 -- # set +x 00:17:20.912 22:03:31 -- nvmf/common.sh@469 -- # nvmfpid=2166753 00:17:20.912 22:03:31 -- nvmf/common.sh@470 -- # waitforlisten 2166753 00:17:20.912 22:03:31 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:20.912 22:03:31 -- common/autotest_common.sh@819 -- # '[' -z 2166753 ']' 00:17:20.912 22:03:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.912 22:03:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:20.912 22:03:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:20.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.912 22:03:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.912 22:03:31 -- common/autotest_common.sh@10 -- # set +x 00:17:20.912 [2024-07-26 22:03:31.825360] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:20.912 [2024-07-26 22:03:31.825411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.912 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.912 [2024-07-26 22:03:31.909072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:20.912 [2024-07-26 22:03:31.948485] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.912 [2024-07-26 22:03:31.948592] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.912 [2024-07-26 22:03:31.948603] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.912 [2024-07-26 22:03:31.948612] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.912 [2024-07-26 22:03:31.948660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.912 [2024-07-26 22:03:31.948755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.912 [2024-07-26 22:03:31.948839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:20.912 [2024-07-26 22:03:31.948841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.480 22:03:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:21.480 22:03:32 -- common/autotest_common.sh@852 -- # return 0 00:17:21.480 22:03:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.480 22:03:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:21.480 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.480 22:03:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.480 22:03:32 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:21.480 22:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.480 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.739 [2024-07-26 22:03:32.709037] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11aa4b0/0x11ae9a0) succeed. 00:17:21.739 [2024-07-26 22:03:32.719501] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11abaa0/0x11f0030) succeed. 
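Condensed, the target bring-up traced above and just below amounts to a handful of RPCs. A minimal sketch, assuming rpc_cmd is the usual thin wrapper over scripts/rpc.py talking to the default /var/tmp/spdk.sock (all argument values are copied from the trace itself):

# Assumes nvmf_tgt is already running, as in the trace above.
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
# Transport first, then backing malloc bdevs, the subsystem, its namespaces, and the RDMA listener.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420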
00:17:21.739 22:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.739 22:03:32 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:21.739 22:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.739 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.739 Malloc0 00:17:21.739 22:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.739 22:03:32 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:21.739 22:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.740 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.740 Malloc1 00:17:21.740 22:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.740 22:03:32 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:21.740 22:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.740 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.740 22:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.740 22:03:32 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:21.740 22:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.740 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.740 22:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.740 22:03:32 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:21.740 22:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.740 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.740 22:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.740 22:03:32 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:21.740 22:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.740 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.740 [2024-07-26 22:03:32.918329] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:21.740 22:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.740 22:03:32 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:21.740 22:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:21.740 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:17:21.740 22:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:21.740 22:03:32 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:21.999 00:17:21.999 Discovery Log Number of Records 2, Generation counter 2 00:17:21.999 =====Discovery Log Entry 0====== 00:17:21.999 trtype: rdma 00:17:21.999 adrfam: ipv4 00:17:21.999 subtype: current discovery subsystem 00:17:21.999 treq: not required 00:17:21.999 portid: 0 00:17:21.999 trsvcid: 4420 00:17:21.999 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:21.999 traddr: 192.168.100.8 00:17:21.999 eflags: explicit discovery connections, duplicate discovery information 00:17:21.999 rdma_prtype: not specified 00:17:21.999 rdma_qptype: connected 00:17:21.999 rdma_cms: rdma-cm 00:17:21.999 rdma_pkey: 0x0000 00:17:21.999 =====Discovery Log Entry 1====== 00:17:21.999 trtype: rdma 
00:17:21.999 adrfam: ipv4 00:17:21.999 subtype: nvme subsystem 00:17:21.999 treq: not required 00:17:21.999 portid: 0 00:17:21.999 trsvcid: 4420 00:17:21.999 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:21.999 traddr: 192.168.100.8 00:17:21.999 eflags: none 00:17:21.999 rdma_prtype: not specified 00:17:21.999 rdma_qptype: connected 00:17:21.999 rdma_cms: rdma-cm 00:17:21.999 rdma_pkey: 0x0000 00:17:21.999 22:03:33 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:21.999 22:03:33 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:21.999 22:03:33 -- nvmf/common.sh@510 -- # local dev _ 00:17:21.999 22:03:33 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:21.999 22:03:33 -- nvmf/common.sh@509 -- # nvme list 00:17:21.999 22:03:33 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:21.999 22:03:33 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:21.999 22:03:33 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:21.999 22:03:33 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:21.999 22:03:33 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:21.999 22:03:33 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:22.933 22:03:34 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:22.933 22:03:34 -- common/autotest_common.sh@1177 -- # local i=0 00:17:22.933 22:03:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:22.933 22:03:34 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:17:22.933 22:03:34 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:17:22.933 22:03:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:24.836 22:03:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:24.836 22:03:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:24.836 22:03:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:17:24.836 22:03:36 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:17:24.836 22:03:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:24.836 22:03:36 -- common/autotest_common.sh@1187 -- # return 0 00:17:24.836 22:03:36 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:24.836 22:03:36 -- nvmf/common.sh@510 -- # local dev _ 00:17:24.836 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:24.836 22:03:36 -- nvmf/common.sh@509 -- # nvme list 00:17:24.836 22:03:36 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:24.836 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:24.836 22:03:36 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:24.836 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:24.836 22:03:36 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:24.836 22:03:36 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:24.836 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:24.836 22:03:36 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:24.836 22:03:36 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:24.836 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:24.836 22:03:36 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:17:24.836 /dev/nvme0n1 ]] 00:17:24.836 22:03:36 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:24.836 22:03:36 -- target/nvme_cli.sh@59 -- # get_nvme_devs 
00:17:24.836 22:03:36 -- nvmf/common.sh@510 -- # local dev _ 00:17:24.836 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:24.836 22:03:36 -- nvmf/common.sh@509 -- # nvme list 00:17:25.095 22:03:36 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:25.095 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.095 22:03:36 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:25.095 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.095 22:03:36 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:25.095 22:03:36 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:25.095 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.095 22:03:36 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:25.095 22:03:36 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:25.095 22:03:36 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:25.095 22:03:36 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:25.095 22:03:36 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.076 22:03:37 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.076 22:03:37 -- common/autotest_common.sh@1198 -- # local i=0 00:17:26.076 22:03:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:17:26.076 22:03:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.076 22:03:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:26.076 22:03:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.076 22:03:37 -- common/autotest_common.sh@1210 -- # return 0 00:17:26.076 22:03:37 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:26.076 22:03:37 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.076 22:03:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.076 22:03:37 -- common/autotest_common.sh@10 -- # set +x 00:17:26.076 22:03:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.076 22:03:37 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:26.076 22:03:37 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:26.076 22:03:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:26.076 22:03:37 -- nvmf/common.sh@116 -- # sync 00:17:26.076 22:03:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:26.076 22:03:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:26.076 22:03:37 -- nvmf/common.sh@119 -- # set +e 00:17:26.076 22:03:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:26.076 22:03:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:26.076 rmmod nvme_rdma 00:17:26.076 rmmod nvme_fabrics 00:17:26.076 22:03:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:26.076 22:03:37 -- nvmf/common.sh@123 -- # set -e 00:17:26.076 22:03:37 -- nvmf/common.sh@124 -- # return 0 00:17:26.076 22:03:37 -- nvmf/common.sh@477 -- # '[' -n 2166753 ']' 00:17:26.076 22:03:37 -- nvmf/common.sh@478 -- # killprocess 2166753 00:17:26.076 22:03:37 -- common/autotest_common.sh@926 -- # '[' -z 2166753 ']' 00:17:26.076 22:03:37 -- common/autotest_common.sh@930 -- # kill -0 2166753 00:17:26.076 22:03:37 -- common/autotest_common.sh@931 -- # uname 00:17:26.076 22:03:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.076 22:03:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2166753 00:17:26.076 22:03:37 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:26.076 22:03:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:26.076 22:03:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2166753' 00:17:26.076 killing process with pid 2166753 00:17:26.076 22:03:37 -- common/autotest_common.sh@945 -- # kill 2166753 00:17:26.076 22:03:37 -- common/autotest_common.sh@950 -- # wait 2166753 00:17:26.354 22:03:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:26.354 22:03:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:26.354 00:17:26.354 real 0m13.686s 00:17:26.354 user 0m24.184s 00:17:26.354 sys 0m6.633s 00:17:26.354 22:03:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.354 22:03:37 -- common/autotest_common.sh@10 -- # set +x 00:17:26.354 ************************************ 00:17:26.354 END TEST nvmf_nvme_cli 00:17:26.354 ************************************ 00:17:26.354 22:03:37 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:26.354 22:03:37 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:26.354 22:03:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:26.354 22:03:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:26.354 22:03:37 -- common/autotest_common.sh@10 -- # set +x 00:17:26.354 ************************************ 00:17:26.354 START TEST nvmf_host_management 00:17:26.354 ************************************ 00:17:26.354 22:03:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:26.613 * Looking for test storage... 00:17:26.613 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:26.613 22:03:37 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.613 22:03:37 -- nvmf/common.sh@7 -- # uname -s 00:17:26.613 22:03:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.613 22:03:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.613 22:03:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.613 22:03:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.613 22:03:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.613 22:03:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.613 22:03:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.613 22:03:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.613 22:03:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.613 22:03:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.613 22:03:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:26.613 22:03:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:26.613 22:03:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.613 22:03:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.613 22:03:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.613 22:03:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:26.613 22:03:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.613 22:03:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.613 22:03:37 -- 
scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.613 22:03:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.613 22:03:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.613 22:03:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.613 22:03:37 -- paths/export.sh@5 -- # export PATH 00:17:26.613 22:03:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.613 22:03:37 -- nvmf/common.sh@46 -- # : 0 00:17:26.613 22:03:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:26.613 22:03:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:26.613 22:03:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:26.613 22:03:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.613 22:03:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.613 22:03:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:26.613 22:03:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:26.613 22:03:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:26.613 22:03:37 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.613 22:03:37 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.613 22:03:37 -- target/host_management.sh@104 -- # nvmftestinit 00:17:26.613 22:03:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:26.613 22:03:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.613 22:03:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:26.613 22:03:37 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:17:26.613 22:03:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:26.613 22:03:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.613 22:03:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.613 22:03:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.613 22:03:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:26.613 22:03:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:26.613 22:03:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:26.613 22:03:37 -- common/autotest_common.sh@10 -- # set +x 00:17:34.734 22:03:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:34.734 22:03:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:34.734 22:03:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:34.734 22:03:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:34.734 22:03:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:34.734 22:03:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:34.734 22:03:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:34.734 22:03:45 -- nvmf/common.sh@294 -- # net_devs=() 00:17:34.734 22:03:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:34.734 22:03:45 -- nvmf/common.sh@295 -- # e810=() 00:17:34.734 22:03:45 -- nvmf/common.sh@295 -- # local -ga e810 00:17:34.734 22:03:45 -- nvmf/common.sh@296 -- # x722=() 00:17:34.734 22:03:45 -- nvmf/common.sh@296 -- # local -ga x722 00:17:34.734 22:03:45 -- nvmf/common.sh@297 -- # mlx=() 00:17:34.734 22:03:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:34.734 22:03:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.734 22:03:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:34.734 22:03:45 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:34.734 22:03:45 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:34.734 22:03:45 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:34.734 22:03:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:34.734 22:03:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:34.734 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:34.734 22:03:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:17:34.734 22:03:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:34.734 22:03:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:34.734 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:34.734 22:03:45 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:34.734 22:03:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:34.734 22:03:45 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.734 22:03:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:34.734 22:03:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.734 22:03:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:34.734 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:34.734 22:03:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.734 22:03:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.734 22:03:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:34.734 22:03:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.734 22:03:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:34.734 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:34.734 22:03:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.734 22:03:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:34.734 22:03:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:34.734 22:03:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:34.734 22:03:45 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:34.734 22:03:45 -- nvmf/common.sh@57 -- # uname 00:17:34.734 22:03:45 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:34.734 22:03:45 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:34.734 22:03:45 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:34.734 22:03:45 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:34.734 22:03:45 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:34.734 22:03:45 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:34.734 22:03:45 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:34.734 22:03:45 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:34.734 22:03:45 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:34.734 22:03:45 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:34.734 22:03:45 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:34.734 22:03:45 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:34.734 22:03:45 -- 
nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:34.734 22:03:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:34.734 22:03:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:34.734 22:03:45 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:34.734 22:03:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:34.734 22:03:45 -- nvmf/common.sh@104 -- # continue 2 00:17:34.734 22:03:45 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.734 22:03:45 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:34.734 22:03:45 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:34.734 22:03:45 -- nvmf/common.sh@104 -- # continue 2 00:17:34.734 22:03:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:34.734 22:03:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:34.734 22:03:45 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:34.734 22:03:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:34.734 22:03:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:34.735 22:03:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:34.735 22:03:45 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:34.735 22:03:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:34.735 22:03:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:34.735 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:34.735 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:34.735 altname enp217s0f0np0 00:17:34.735 altname ens818f0np0 00:17:34.735 inet 192.168.100.8/24 scope global mlx_0_0 00:17:34.735 valid_lft forever preferred_lft forever 00:17:34.735 22:03:45 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:34.735 22:03:45 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:34.735 22:03:45 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:34.735 22:03:45 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:34.735 22:03:45 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:34.735 22:03:45 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:34.994 22:03:45 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:34.994 22:03:45 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:34.994 22:03:45 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:34.995 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:34.995 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:34.995 altname enp217s0f1np1 00:17:34.995 altname ens818f1np1 00:17:34.995 inet 192.168.100.9/24 scope global mlx_0_1 00:17:34.995 valid_lft forever preferred_lft forever 00:17:34.995 22:03:45 -- nvmf/common.sh@410 -- # return 0 00:17:34.995 22:03:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:34.995 22:03:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:34.995 22:03:45 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:34.995 22:03:45 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:34.995 22:03:45 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:34.995 22:03:45 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:34.995 22:03:45 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:34.995 22:03:45 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:34.995 22:03:45 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:34.995 22:03:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:34.995 22:03:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:34.995 22:03:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.995 22:03:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:34.995 22:03:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:34.995 22:03:46 -- nvmf/common.sh@104 -- # continue 2 00:17:34.995 22:03:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:34.995 22:03:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.995 22:03:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:34.995 22:03:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:34.995 22:03:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:34.995 22:03:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:34.995 22:03:46 -- nvmf/common.sh@104 -- # continue 2 00:17:34.995 22:03:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:34.995 22:03:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:34.995 22:03:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:34.995 22:03:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:34.995 22:03:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:34.995 22:03:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:34.995 22:03:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:34.995 22:03:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:34.995 22:03:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:34.995 22:03:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:34.995 22:03:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:34.995 22:03:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:34.995 22:03:46 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:34.995 192.168.100.9' 00:17:34.995 22:03:46 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:34.995 192.168.100.9' 00:17:34.995 22:03:46 -- nvmf/common.sh@445 -- # head -n 1 00:17:34.995 22:03:46 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:34.995 22:03:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:34.995 192.168.100.9' 00:17:34.995 22:03:46 -- nvmf/common.sh@446 -- # tail -n +2 00:17:34.995 22:03:46 -- nvmf/common.sh@446 -- # head -n 1 00:17:34.995 22:03:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:34.995 22:03:46 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:34.995 22:03:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:34.995 22:03:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:34.995 22:03:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:34.995 22:03:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:34.995 22:03:46 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:34.995 22:03:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:34.995 22:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:34.995 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:17:34.995 
************************************ 00:17:34.995 START TEST nvmf_host_management 00:17:34.995 ************************************ 00:17:34.995 22:03:46 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:17:34.995 22:03:46 -- target/host_management.sh@69 -- # starttarget 00:17:34.995 22:03:46 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:34.995 22:03:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:34.995 22:03:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:34.995 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:17:34.995 22:03:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:34.995 22:03:46 -- nvmf/common.sh@469 -- # nvmfpid=2171808 00:17:34.995 22:03:46 -- nvmf/common.sh@470 -- # waitforlisten 2171808 00:17:34.995 22:03:46 -- common/autotest_common.sh@819 -- # '[' -z 2171808 ']' 00:17:34.995 22:03:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.995 22:03:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:34.995 22:03:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.995 22:03:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:34.995 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:17:34.995 [2024-07-26 22:03:46.119552] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:34.995 [2024-07-26 22:03:46.119601] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.995 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.995 [2024-07-26 22:03:46.202323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.254 [2024-07-26 22:03:46.240224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:35.254 [2024-07-26 22:03:46.240325] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.254 [2024-07-26 22:03:46.240335] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.254 [2024-07-26 22:03:46.240345] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
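The app_setup_trace notices above describe how to pull the runtime tracepoints from this target. A short sketch following those hints (the build/bin location of the spdk_trace tool is an assumption; the commands themselves are taken from the notice text):

# Snapshot events from the running nvmf app with shm id 0, as the notice suggests:
./build/bin/spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0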
00:17:35.254 [2024-07-26 22:03:46.240443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.254 [2024-07-26 22:03:46.240527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.254 [2024-07-26 22:03:46.240655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.254 [2024-07-26 22:03:46.240656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:35.823 22:03:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:35.823 22:03:46 -- common/autotest_common.sh@852 -- # return 0 00:17:35.823 22:03:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:35.823 22:03:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:35.823 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:17:35.823 22:03:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.823 22:03:46 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:35.823 22:03:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:35.823 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:17:35.823 [2024-07-26 22:03:47.020624] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc897a0/0xc8dc90) succeed. 00:17:35.823 [2024-07-26 22:03:47.031018] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc8ad90/0xccf320) succeed. 00:17:36.082 22:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.082 22:03:47 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:36.082 22:03:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:36.082 22:03:47 -- common/autotest_common.sh@10 -- # set +x 00:17:36.082 22:03:47 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:36.082 22:03:47 -- target/host_management.sh@23 -- # cat 00:17:36.082 22:03:47 -- target/host_management.sh@30 -- # rpc_cmd 00:17:36.082 22:03:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:36.082 22:03:47 -- common/autotest_common.sh@10 -- # set +x 00:17:36.082 Malloc0 00:17:36.082 [2024-07-26 22:03:47.210322] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:36.082 22:03:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:36.082 22:03:47 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:36.082 22:03:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:36.082 22:03:47 -- common/autotest_common.sh@10 -- # set +x 00:17:36.082 22:03:47 -- target/host_management.sh@73 -- # perfpid=2172119 00:17:36.082 22:03:47 -- target/host_management.sh@74 -- # waitforlisten 2172119 /var/tmp/bdevperf.sock 00:17:36.082 22:03:47 -- common/autotest_common.sh@819 -- # '[' -z 2172119 ']' 00:17:36.082 22:03:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.082 22:03:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:36.082 22:03:47 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:36.082 22:03:47 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:36.082 22:03:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:36.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:36.082 22:03:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:36.082 22:03:47 -- nvmf/common.sh@520 -- # config=() 00:17:36.082 22:03:47 -- common/autotest_common.sh@10 -- # set +x 00:17:36.082 22:03:47 -- nvmf/common.sh@520 -- # local subsystem config 00:17:36.082 22:03:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:36.082 22:03:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:36.082 { 00:17:36.082 "params": { 00:17:36.082 "name": "Nvme$subsystem", 00:17:36.082 "trtype": "$TEST_TRANSPORT", 00:17:36.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:36.082 "adrfam": "ipv4", 00:17:36.082 "trsvcid": "$NVMF_PORT", 00:17:36.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:36.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:36.082 "hdgst": ${hdgst:-false}, 00:17:36.082 "ddgst": ${ddgst:-false} 00:17:36.082 }, 00:17:36.082 "method": "bdev_nvme_attach_controller" 00:17:36.082 } 00:17:36.082 EOF 00:17:36.082 )") 00:17:36.082 22:03:47 -- nvmf/common.sh@542 -- # cat 00:17:36.082 22:03:47 -- nvmf/common.sh@544 -- # jq . 00:17:36.082 22:03:47 -- nvmf/common.sh@545 -- # IFS=, 00:17:36.082 22:03:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:36.082 "params": { 00:17:36.082 "name": "Nvme0", 00:17:36.082 "trtype": "rdma", 00:17:36.082 "traddr": "192.168.100.8", 00:17:36.082 "adrfam": "ipv4", 00:17:36.082 "trsvcid": "4420", 00:17:36.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:36.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:36.082 "hdgst": false, 00:17:36.082 "ddgst": false 00:17:36.082 }, 00:17:36.082 "method": "bdev_nvme_attach_controller" 00:17:36.082 }' 00:17:36.341 [2024-07-26 22:03:47.313471] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:36.341 [2024-07-26 22:03:47.313519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172119 ] 00:17:36.341 EAL: No free 2048 kB hugepages reported on node 1 00:17:36.341 [2024-07-26 22:03:47.398074] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.341 [2024-07-26 22:03:47.434348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.600 Running I/O for 10 seconds... 
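The --json /dev/fd/63 argument above is a process substitution carrying the config fragment that gen_nvmf_target_json prints a few lines earlier. A self-contained sketch of the same bdevperf run driven from a file, with the attach entry copied verbatim from the trace and the outer "subsystems"/"bdev" envelope assumed rather than shown in this excerpt:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same queue depth, I/O size, workload, and duration as the traced invocation.
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 10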
00:17:37.166 22:03:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:37.166 22:03:48 -- common/autotest_common.sh@852 -- # return 0 00:17:37.166 22:03:48 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:37.166 22:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:37.166 22:03:48 -- common/autotest_common.sh@10 -- # set +x 00:17:37.166 22:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:37.166 22:03:48 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:37.166 22:03:48 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:37.166 22:03:48 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:37.166 22:03:48 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:37.166 22:03:48 -- target/host_management.sh@52 -- # local ret=1 00:17:37.166 22:03:48 -- target/host_management.sh@53 -- # local i 00:17:37.166 22:03:48 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:37.166 22:03:48 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:37.166 22:03:48 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:37.166 22:03:48 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:37.166 22:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:37.166 22:03:48 -- common/autotest_common.sh@10 -- # set +x 00:17:37.166 22:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:37.166 22:03:48 -- target/host_management.sh@55 -- # read_io_count=2937 00:17:37.166 22:03:48 -- target/host_management.sh@58 -- # '[' 2937 -ge 100 ']' 00:17:37.166 22:03:48 -- target/host_management.sh@59 -- # ret=0 00:17:37.166 22:03:48 -- target/host_management.sh@60 -- # break 00:17:37.166 22:03:48 -- target/host_management.sh@64 -- # return 0 00:17:37.166 22:03:48 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:37.166 22:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:37.166 22:03:48 -- common/autotest_common.sh@10 -- # set +x 00:17:37.166 22:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:37.166 22:03:48 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:37.166 22:03:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:37.166 22:03:48 -- common/autotest_common.sh@10 -- # set +x 00:17:37.166 22:03:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:37.166 22:03:48 -- target/host_management.sh@87 -- # sleep 1 00:17:38.101 [2024-07-26 22:03:49.183870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182700 00:17:38.101 [2024-07-26 22:03:49.183905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.183925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:17:38.102 [2024-07-26 22:03:49.183935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 
[2024-07-26 22:03:49.183947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:17:38.102 [2024-07-26 22:03:49.183956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.183967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.183976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.183988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.183998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182700 00:17:38.102 [2024-07-26 22:03:49.184018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182500 00:17:38.102 [2024-07-26 22:03:49.184042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.184063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182000 00:17:38.102 [2024-07-26 22:03:49.184084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.184124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184137] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:17:38.102 [2024-07-26 22:03:49.184168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.184209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:17:38.102 [2024-07-26 22:03:49.184231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:17:38.102 [2024-07-26 22:03:49.184276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182700 00:17:38.102 [2024-07-26 22:03:49.184298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182700 00:17:38.102 [2024-07-26 22:03:49.184319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182500 00:17:38.102 [2024-07-26 22:03:49.184341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182000 00:17:38.102 [2024-07-26 22:03:49.184421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:17:38.102 [2024-07-26 22:03:49.184441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:17:38.102 [2024-07-26 22:03:49.184481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182600 00:17:38.102 [2024-07-26 22:03:49.184501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:23552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.184522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:17:38.102 [2024-07-26 22:03:49.184542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.184561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.184580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:17:38.102 [2024-07-26 22:03:49.184601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182000 00:17:38.102 [2024-07-26 22:03:49.184621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:17:38.102 [2024-07-26 22:03:49.184646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.102 [2024-07-26 22:03:49.184657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182400 00:17:38.102 [2024-07-26 22:03:49.184666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:17:38.103 [2024-07-26 22:03:49.184686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24704 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182500 00:17:38.103 [2024-07-26 22:03:49.184706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:17:38.103 [2024-07-26 22:03:49.184725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba30000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd06000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c480000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c83d000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16000 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000c81c000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c690000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.184982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.184992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.185012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.185032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9c9000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.185052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 
00:17:38.103 [2024-07-26 22:03:49.185072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:17:38.103 [2024-07-26 22:03:49.185092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:17:38.103 [2024-07-26 22:03:49.185112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9a8000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.185133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c987000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.185153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c966000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.185173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c945000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.185194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.185205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c924000 len:0x10000 key:0x182300 00:17:38.103 [2024-07-26 22:03:49.185214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:73e61000 sqhd:5310 p:0 m:0 dnr:0 00:17:38.103 [2024-07-26 22:03:49.187156] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 
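The notices above are one print_command/print_completion pair for every in-flight I/O that was aborted with SQ DELETION when the queue pair was torn down for the controller reset. A throwaway helper for condensing such a dump -- not part of the test suite, and it assumes the console output was saved to a file named build.log:

  # total aborted completions, then the aborted commands broken down by opcode (READ vs WRITE)
  grep -c 'ABORTED - SQ DELETION' build.log
  grep 'nvme_io_qpair_print_command' build.log | grep -o 'NOTICE\*: [A-Z]*' | sort | uniq -c
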
00:17:38.103 [2024-07-26 22:03:49.188034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:38.103 task offset: 19840 on job bdev=Nvme0n1 fails 00:17:38.103 00:17:38.103 Latency(us) 00:17:38.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.103 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:38.103 Job: Nvme0n1 ended in about 1.58 seconds with error 00:17:38.103 Verification LBA range: start 0x0 length 0x400 00:17:38.103 Nvme0n1 : 1.58 2026.56 126.66 40.46 0.00 30761.00 3460.30 1013343.85 00:17:38.103 =================================================================================================================== 00:17:38.103 Total : 2026.56 126.66 40.46 0.00 30761.00 3460.30 1013343.85 00:17:38.103 [2024-07-26 22:03:49.189680] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:38.103 22:03:49 -- target/host_management.sh@91 -- # kill -9 2172119 00:17:38.103 22:03:49 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:38.103 22:03:49 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:38.103 22:03:49 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:38.103 22:03:49 -- nvmf/common.sh@520 -- # config=() 00:17:38.103 22:03:49 -- nvmf/common.sh@520 -- # local subsystem config 00:17:38.103 22:03:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:38.103 22:03:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:38.103 { 00:17:38.103 "params": { 00:17:38.103 "name": "Nvme$subsystem", 00:17:38.103 "trtype": "$TEST_TRANSPORT", 00:17:38.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:38.103 "adrfam": "ipv4", 00:17:38.103 "trsvcid": "$NVMF_PORT", 00:17:38.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:38.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:38.103 "hdgst": ${hdgst:-false}, 00:17:38.103 "ddgst": ${ddgst:-false} 00:17:38.103 }, 00:17:38.103 "method": "bdev_nvme_attach_controller" 00:17:38.103 } 00:17:38.104 EOF 00:17:38.104 )") 00:17:38.104 22:03:49 -- nvmf/common.sh@542 -- # cat 00:17:38.104 22:03:49 -- nvmf/common.sh@544 -- # jq . 00:17:38.104 22:03:49 -- nvmf/common.sh@545 -- # IFS=, 00:17:38.104 22:03:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:38.104 "params": { 00:17:38.104 "name": "Nvme0", 00:17:38.104 "trtype": "rdma", 00:17:38.104 "traddr": "192.168.100.8", 00:17:38.104 "adrfam": "ipv4", 00:17:38.104 "trsvcid": "4420", 00:17:38.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:38.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:38.104 "hdgst": false, 00:17:38.104 "ddgst": false 00:17:38.104 }, 00:17:38.104 "method": "bdev_nvme_attach_controller" 00:17:38.104 }' 00:17:38.104 [2024-07-26 22:03:49.245770] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:17:38.104 [2024-07-26 22:03:49.245819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172399 ] 00:17:38.104 EAL: No free 2048 kB hugepages reported on node 1 00:17:38.362 [2024-07-26 22:03:49.330108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.362 [2024-07-26 22:03:49.366739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.362 Running I/O for 1 seconds... 00:17:39.739 00:17:39.739 Latency(us) 00:17:39.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.739 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:39.739 Verification LBA range: start 0x0 length 0x400 00:17:39.739 Nvme0n1 : 1.00 5557.19 347.32 0.00 0.00 11341.12 616.04 24956.11 00:17:39.739 =================================================================================================================== 00:17:39.739 Total : 5557.19 347.32 0.00 0.00 11341.12 616.04 24956.11 00:17:39.739 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2172119 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:17:39.739 22:03:50 -- target/host_management.sh@101 -- # stoptarget 00:17:39.739 22:03:50 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:39.739 22:03:50 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:39.739 22:03:50 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:39.739 22:03:50 -- target/host_management.sh@40 -- # nvmftestfini 00:17:39.739 22:03:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:39.739 22:03:50 -- nvmf/common.sh@116 -- # sync 00:17:39.739 22:03:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:39.739 22:03:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:39.739 22:03:50 -- nvmf/common.sh@119 -- # set +e 00:17:39.739 22:03:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:39.739 22:03:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:39.739 rmmod nvme_rdma 00:17:39.739 rmmod nvme_fabrics 00:17:39.739 22:03:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:39.739 22:03:50 -- nvmf/common.sh@123 -- # set -e 00:17:39.739 22:03:50 -- nvmf/common.sh@124 -- # return 0 00:17:39.739 22:03:50 -- nvmf/common.sh@477 -- # '[' -n 2171808 ']' 00:17:39.739 22:03:50 -- nvmf/common.sh@478 -- # killprocess 2171808 00:17:39.739 22:03:50 -- common/autotest_common.sh@926 -- # '[' -z 2171808 ']' 00:17:39.739 22:03:50 -- common/autotest_common.sh@930 -- # kill -0 2171808 00:17:39.739 22:03:50 -- common/autotest_common.sh@931 -- # uname 00:17:39.739 22:03:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:39.739 22:03:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2171808 00:17:39.739 22:03:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:39.739 22:03:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:39.739 22:03:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2171808' 00:17:39.739 killing process with pid 2171808 00:17:39.739 22:03:50 -- common/autotest_common.sh@945 -- # kill 2171808 00:17:39.739 
22:03:50 -- common/autotest_common.sh@950 -- # wait 2171808 00:17:39.997 [2024-07-26 22:03:51.124211] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:39.997 22:03:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:39.997 22:03:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:39.997 00:17:39.997 real 0m5.060s 00:17:39.997 user 0m22.723s 00:17:39.997 sys 0m1.059s 00:17:39.997 22:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.997 22:03:51 -- common/autotest_common.sh@10 -- # set +x 00:17:39.997 ************************************ 00:17:39.997 END TEST nvmf_host_management 00:17:39.997 ************************************ 00:17:39.997 22:03:51 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:39.997 00:17:39.997 real 0m13.613s 00:17:39.997 user 0m25.154s 00:17:39.997 sys 0m7.453s 00:17:39.997 22:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.997 22:03:51 -- common/autotest_common.sh@10 -- # set +x 00:17:39.997 ************************************ 00:17:39.997 END TEST nvmf_host_management 00:17:39.997 ************************************ 00:17:40.256 22:03:51 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:40.256 22:03:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:40.256 22:03:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.256 22:03:51 -- common/autotest_common.sh@10 -- # set +x 00:17:40.256 ************************************ 00:17:40.256 START TEST nvmf_lvol 00:17:40.256 ************************************ 00:17:40.256 22:03:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:17:40.256 * Looking for test storage... 
00:17:40.256 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:40.256 22:03:51 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.256 22:03:51 -- nvmf/common.sh@7 -- # uname -s 00:17:40.256 22:03:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.256 22:03:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.256 22:03:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.256 22:03:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.256 22:03:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.256 22:03:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.256 22:03:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.256 22:03:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.256 22:03:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.256 22:03:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.256 22:03:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:40.256 22:03:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:40.256 22:03:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.256 22:03:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.256 22:03:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.256 22:03:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:40.256 22:03:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.256 22:03:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.256 22:03:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.256 22:03:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.257 22:03:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.257 22:03:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.257 22:03:51 -- paths/export.sh@5 -- # export PATH 00:17:40.257 22:03:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.257 22:03:51 -- nvmf/common.sh@46 -- # : 0 00:17:40.257 22:03:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.257 22:03:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.257 22:03:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.257 22:03:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.257 22:03:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.257 22:03:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.257 22:03:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.257 22:03:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.257 22:03:51 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.257 22:03:51 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.257 22:03:51 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:40.257 22:03:51 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:40.257 22:03:51 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:40.257 22:03:51 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:40.257 22:03:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:40.257 22:03:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.257 22:03:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.257 22:03:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.257 22:03:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.257 22:03:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.257 22:03:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.257 22:03:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.257 22:03:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:40.257 22:03:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:40.257 22:03:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:40.257 22:03:51 -- common/autotest_common.sh@10 -- # set +x 00:17:48.379 22:03:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:48.379 22:03:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:48.379 22:03:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:48.379 22:03:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:48.379 22:03:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:48.379 22:03:58 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:17:48.379 22:03:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:48.379 22:03:58 -- nvmf/common.sh@294 -- # net_devs=() 00:17:48.379 22:03:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:48.379 22:03:58 -- nvmf/common.sh@295 -- # e810=() 00:17:48.379 22:03:58 -- nvmf/common.sh@295 -- # local -ga e810 00:17:48.379 22:03:58 -- nvmf/common.sh@296 -- # x722=() 00:17:48.379 22:03:58 -- nvmf/common.sh@296 -- # local -ga x722 00:17:48.379 22:03:58 -- nvmf/common.sh@297 -- # mlx=() 00:17:48.379 22:03:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:48.379 22:03:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.379 22:03:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:48.379 22:03:58 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:48.379 22:03:58 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:48.379 22:03:58 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:48.379 22:03:58 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:48.379 22:03:58 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:48.379 22:03:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:48.379 22:03:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:48.380 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:48.380 22:03:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:48.380 22:03:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:48.380 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:48.380 22:03:58 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:48.380 22:03:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:48.380 22:03:58 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.380 22:03:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:48.380 22:03:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.380 22:03:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:48.380 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:48.380 22:03:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.380 22:03:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.380 22:03:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:48.380 22:03:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.380 22:03:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:48.380 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:48.380 22:03:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.380 22:03:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:48.380 22:03:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:48.380 22:03:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:48.380 22:03:58 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:48.380 22:03:58 -- nvmf/common.sh@57 -- # uname 00:17:48.380 22:03:58 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:48.380 22:03:58 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:48.380 22:03:58 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:48.380 22:03:58 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:48.380 22:03:58 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:48.380 22:03:58 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:48.380 22:03:58 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:48.380 22:03:58 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:48.380 22:03:58 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:48.380 22:03:58 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:48.380 22:03:58 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:48.380 22:03:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:48.380 22:03:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:48.380 22:03:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:48.380 22:03:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:48.380 22:03:58 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:48.380 22:03:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:48.380 22:03:58 -- nvmf/common.sh@104 -- # continue 2 00:17:48.380 22:03:58 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:17:48.380 22:03:58 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:48.380 22:03:58 -- nvmf/common.sh@104 -- # continue 2 00:17:48.380 22:03:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:48.380 22:03:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:48.380 22:03:58 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:48.380 22:03:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:48.380 22:03:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:48.380 22:03:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:48.380 22:03:58 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:48.380 22:03:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:48.380 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:48.380 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:48.380 altname enp217s0f0np0 00:17:48.380 altname ens818f0np0 00:17:48.380 inet 192.168.100.8/24 scope global mlx_0_0 00:17:48.380 valid_lft forever preferred_lft forever 00:17:48.380 22:03:58 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:48.380 22:03:58 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:48.380 22:03:58 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:48.380 22:03:58 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:48.380 22:03:58 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:48.380 22:03:58 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:48.380 22:03:58 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:48.380 22:03:58 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:48.380 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:48.380 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:48.380 altname enp217s0f1np1 00:17:48.380 altname ens818f1np1 00:17:48.380 inet 192.168.100.9/24 scope global mlx_0_1 00:17:48.380 valid_lft forever preferred_lft forever 00:17:48.380 22:03:58 -- nvmf/common.sh@410 -- # return 0 00:17:48.380 22:03:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:48.380 22:03:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:48.380 22:03:58 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:48.380 22:03:58 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:48.380 22:03:58 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:48.380 22:03:58 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:48.380 22:03:58 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:48.380 22:03:58 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:48.380 22:03:58 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:48.380 22:03:59 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:48.380 22:03:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:48.380 22:03:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.380 22:03:59 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:48.380 22:03:59 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:48.380 22:03:59 -- nvmf/common.sh@104 -- # continue 2 00:17:48.380 22:03:59 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:48.380 22:03:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.380 22:03:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:17:48.380 22:03:59 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:48.380 22:03:59 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:48.380 22:03:59 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:48.380 22:03:59 -- nvmf/common.sh@104 -- # continue 2 00:17:48.380 22:03:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:48.380 22:03:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:48.380 22:03:59 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:48.380 22:03:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:48.380 22:03:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:48.380 22:03:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:48.380 22:03:59 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:48.380 22:03:59 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:48.380 22:03:59 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:48.380 22:03:59 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:48.380 22:03:59 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:48.380 22:03:59 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:48.380 22:03:59 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:48.380 192.168.100.9' 00:17:48.380 22:03:59 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:48.380 192.168.100.9' 00:17:48.380 22:03:59 -- nvmf/common.sh@445 -- # head -n 1 00:17:48.380 22:03:59 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:48.380 22:03:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:48.380 192.168.100.9' 00:17:48.380 22:03:59 -- nvmf/common.sh@446 -- # tail -n +2 00:17:48.380 22:03:59 -- nvmf/common.sh@446 -- # head -n 1 00:17:48.380 22:03:59 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:48.380 22:03:59 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:48.380 22:03:59 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:48.380 22:03:59 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:48.380 22:03:59 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:48.380 22:03:59 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:48.380 22:03:59 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:48.380 22:03:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:48.380 22:03:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:48.380 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:17:48.380 22:03:59 -- nvmf/common.sh@469 -- # nvmfpid=2176621 00:17:48.380 22:03:59 -- nvmf/common.sh@470 -- # waitforlisten 2176621 00:17:48.380 22:03:59 -- common/autotest_common.sh@819 -- # '[' -z 2176621 ']' 00:17:48.380 22:03:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.380 22:03:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.381 22:03:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.381 22:03:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.381 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:17:48.381 22:03:59 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:48.381 [2024-07-26 22:03:59.129448] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:17:48.381 [2024-07-26 22:03:59.129496] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.381 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.381 [2024-07-26 22:03:59.212826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:48.381 [2024-07-26 22:03:59.250896] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:48.381 [2024-07-26 22:03:59.251004] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.381 [2024-07-26 22:03:59.251014] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.381 [2024-07-26 22:03:59.251023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.381 [2024-07-26 22:03:59.251073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.381 [2024-07-26 22:03:59.251095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.381 [2024-07-26 22:03:59.251097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.950 22:03:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:48.950 22:03:59 -- common/autotest_common.sh@852 -- # return 0 00:17:48.950 22:03:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:48.950 22:03:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:48.950 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:17:48.950 22:03:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.950 22:03:59 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:48.950 [2024-07-26 22:04:00.129668] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x218f9d0/0x2193ec0) succeed. 00:17:48.950 [2024-07-26 22:04:00.139984] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2190f20/0x21d5550) succeed. 
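The rpc.py trace that follows provisions the lvol stack and exports it over RDMA. A condensed sketch of that sequence, with the long script path shortened to rpc.py and the UUID arguments replaced by placeholders (the run below prints the real ones, e.g. 6492aba9-... for the lvstore and 3c88720e-... for the lvol):

  rpc.py bdev_malloc_create 64 512                                   # Malloc0
  rpc.py bdev_malloc_create 64 512                                   # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
  rpc.py bdev_lvol_create_lvstore raid0 lvs                          # prints the lvstore UUID
  rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 20                  # size 20 (LVOL_BDEV_INIT_SIZE), prints the lvol UUID
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT                  # taken while spdk_nvme_perf is running
  rpc.py bdev_lvol_resize <lvol-uuid> 30                             # grow to LVOL_BDEV_FINAL_SIZE
  rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
  rpc.py bdev_lvol_inflate <clone-uuid>
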
00:17:49.209 22:04:00 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.209 22:04:00 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:49.469 22:04:00 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.469 22:04:00 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:49.469 22:04:00 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:49.728 22:04:00 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:49.987 22:04:00 -- target/nvmf_lvol.sh@29 -- # lvs=6492aba9-1982-45c0-91ab-aa6e975a58d4 00:17:49.987 22:04:00 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6492aba9-1982-45c0-91ab-aa6e975a58d4 lvol 20 00:17:49.987 22:04:01 -- target/nvmf_lvol.sh@32 -- # lvol=3c88720e-4990-4ac5-8158-9a1bd275f4ee 00:17:49.987 22:04:01 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:50.246 22:04:01 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3c88720e-4990-4ac5-8158-9a1bd275f4ee 00:17:50.505 22:04:01 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:50.505 [2024-07-26 22:04:01.635522] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:50.505 22:04:01 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:50.764 22:04:01 -- target/nvmf_lvol.sh@42 -- # perf_pid=2177188 00:17:50.764 22:04:01 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:50.764 22:04:01 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:50.764 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.701 22:04:02 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3c88720e-4990-4ac5-8158-9a1bd275f4ee MY_SNAPSHOT 00:17:51.961 22:04:03 -- target/nvmf_lvol.sh@47 -- # snapshot=cdff1667-6997-479f-95d8-63c9b935fe89 00:17:51.961 22:04:03 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3c88720e-4990-4ac5-8158-9a1bd275f4ee 30 00:17:52.224 22:04:03 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone cdff1667-6997-479f-95d8-63c9b935fe89 MY_CLONE 00:17:52.224 22:04:03 -- target/nvmf_lvol.sh@49 -- # clone=801d0f67-16fa-4f0a-be6c-455cd62fe8a3 00:17:52.224 22:04:03 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 801d0f67-16fa-4f0a-be6c-455cd62fe8a3 00:17:52.521 22:04:03 -- target/nvmf_lvol.sh@53 -- # wait 2177188 00:18:02.498 Initializing NVMe Controllers 00:18:02.498 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:18:02.498 Controller IO queue size 128, less than required. 00:18:02.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.498 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:02.498 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:02.498 Initialization complete. Launching workers. 00:18:02.498 ======================================================== 00:18:02.498 Latency(us) 00:18:02.498 Device Information : IOPS MiB/s Average min max 00:18:02.498 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16774.00 65.52 7632.62 2036.90 35941.91 00:18:02.498 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16693.60 65.21 7669.38 3044.07 37460.38 00:18:02.498 ======================================================== 00:18:02.498 Total : 33467.60 130.73 7650.96 2036.90 37460.38 00:18:02.498 00:18:02.498 22:04:13 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:02.498 22:04:13 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3c88720e-4990-4ac5-8158-9a1bd275f4ee 00:18:02.498 22:04:13 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6492aba9-1982-45c0-91ab-aa6e975a58d4 00:18:02.756 22:04:13 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:02.756 22:04:13 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:02.756 22:04:13 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:02.756 22:04:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:02.756 22:04:13 -- nvmf/common.sh@116 -- # sync 00:18:02.756 22:04:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:02.756 22:04:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:02.756 22:04:13 -- nvmf/common.sh@119 -- # set +e 00:18:02.756 22:04:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:02.756 22:04:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:02.756 rmmod nvme_rdma 00:18:02.756 rmmod nvme_fabrics 00:18:02.756 22:04:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:02.756 22:04:13 -- nvmf/common.sh@123 -- # set -e 00:18:02.756 22:04:13 -- nvmf/common.sh@124 -- # return 0 00:18:02.756 22:04:13 -- nvmf/common.sh@477 -- # '[' -n 2176621 ']' 00:18:02.756 22:04:13 -- nvmf/common.sh@478 -- # killprocess 2176621 00:18:02.756 22:04:13 -- common/autotest_common.sh@926 -- # '[' -z 2176621 ']' 00:18:02.756 22:04:13 -- common/autotest_common.sh@930 -- # kill -0 2176621 00:18:02.756 22:04:13 -- common/autotest_common.sh@931 -- # uname 00:18:02.756 22:04:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:02.756 22:04:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2176621 00:18:02.756 22:04:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:02.756 22:04:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:02.756 22:04:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2176621' 00:18:02.756 killing process with pid 2176621 00:18:02.756 22:04:13 -- common/autotest_common.sh@945 -- # kill 2176621 00:18:02.756 22:04:13 -- common/autotest_common.sh@950 -- # wait 2176621 00:18:03.016 22:04:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:03.016 22:04:14 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:03.016 00:18:03.016 real 0m22.881s 00:18:03.016 user 1m10.907s 00:18:03.016 sys 0m7.057s 00:18:03.016 22:04:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:03.016 22:04:14 -- common/autotest_common.sh@10 -- # set +x 00:18:03.016 ************************************ 00:18:03.016 END TEST nvmf_lvol 00:18:03.016 ************************************ 00:18:03.016 22:04:14 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:03.016 22:04:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:03.016 22:04:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:03.016 22:04:14 -- common/autotest_common.sh@10 -- # set +x 00:18:03.016 ************************************ 00:18:03.016 START TEST nvmf_lvs_grow 00:18:03.016 ************************************ 00:18:03.016 22:04:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:03.275 * Looking for test storage... 00:18:03.275 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:03.275 22:04:14 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:03.275 22:04:14 -- nvmf/common.sh@7 -- # uname -s 00:18:03.275 22:04:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.275 22:04:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.275 22:04:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.275 22:04:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.275 22:04:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.275 22:04:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.275 22:04:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.275 22:04:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.275 22:04:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.275 22:04:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.275 22:04:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:03.275 22:04:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:03.275 22:04:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.275 22:04:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.275 22:04:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.275 22:04:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:03.275 22:04:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.275 22:04:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.275 22:04:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.275 22:04:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:03.276 22:04:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.276 22:04:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.276 22:04:14 -- paths/export.sh@5 -- # export PATH 00:18:03.276 22:04:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.276 22:04:14 -- nvmf/common.sh@46 -- # : 0 00:18:03.276 22:04:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:03.276 22:04:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:03.276 22:04:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:03.276 22:04:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.276 22:04:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.276 22:04:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:03.276 22:04:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:03.276 22:04:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:03.276 22:04:14 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:03.276 22:04:14 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.276 22:04:14 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:03.276 22:04:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:03.276 22:04:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.276 22:04:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:03.276 22:04:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:03.276 22:04:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:03.276 22:04:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.276 22:04:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.276 22:04:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.276 22:04:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:03.276 22:04:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:03.276 22:04:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:03.276 
22:04:14 -- common/autotest_common.sh@10 -- # set +x 00:18:11.396 22:04:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:11.396 22:04:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:11.396 22:04:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:11.396 22:04:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:11.396 22:04:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:11.396 22:04:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:11.396 22:04:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:11.396 22:04:21 -- nvmf/common.sh@294 -- # net_devs=() 00:18:11.396 22:04:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:11.396 22:04:21 -- nvmf/common.sh@295 -- # e810=() 00:18:11.396 22:04:21 -- nvmf/common.sh@295 -- # local -ga e810 00:18:11.396 22:04:21 -- nvmf/common.sh@296 -- # x722=() 00:18:11.396 22:04:21 -- nvmf/common.sh@296 -- # local -ga x722 00:18:11.396 22:04:21 -- nvmf/common.sh@297 -- # mlx=() 00:18:11.396 22:04:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:11.396 22:04:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.396 22:04:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:11.396 22:04:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:11.396 22:04:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:11.396 22:04:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:11.396 22:04:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:11.396 22:04:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:11.396 22:04:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:11.396 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:11.396 22:04:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:11.396 22:04:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:11.396 22:04:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:11.396 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:11.396 22:04:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:11.396 22:04:21 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:11.396 22:04:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:11.396 22:04:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:11.396 22:04:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:11.396 22:04:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.396 22:04:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:11.396 22:04:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.396 22:04:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:11.396 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:11.396 22:04:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.396 22:04:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.397 22:04:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:11.397 22:04:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.397 22:04:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:11.397 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.397 22:04:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:11.397 22:04:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:11.397 22:04:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:11.397 22:04:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:11.397 22:04:21 -- nvmf/common.sh@57 -- # uname 00:18:11.397 22:04:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:11.397 22:04:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:11.397 22:04:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:11.397 22:04:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:11.397 22:04:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:11.397 22:04:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:11.397 22:04:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:11.397 22:04:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:11.397 22:04:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:11.397 22:04:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:11.397 22:04:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:11.397 22:04:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:11.397 22:04:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:11.397 22:04:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:11.397 22:04:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:11.397 22:04:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:11.397 22:04:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:11.397 
22:04:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:11.397 22:04:21 -- nvmf/common.sh@104 -- # continue 2 00:18:11.397 22:04:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@104 -- # continue 2 00:18:11.397 22:04:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:11.397 22:04:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:11.397 22:04:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:11.397 22:04:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:11.397 22:04:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:11.397 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:11.397 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:11.397 altname enp217s0f0np0 00:18:11.397 altname ens818f0np0 00:18:11.397 inet 192.168.100.8/24 scope global mlx_0_0 00:18:11.397 valid_lft forever preferred_lft forever 00:18:11.397 22:04:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:11.397 22:04:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:11.397 22:04:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:11.397 22:04:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:11.397 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:11.397 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:11.397 altname enp217s0f1np1 00:18:11.397 altname ens818f1np1 00:18:11.397 inet 192.168.100.9/24 scope global mlx_0_1 00:18:11.397 valid_lft forever preferred_lft forever 00:18:11.397 22:04:21 -- nvmf/common.sh@410 -- # return 0 00:18:11.397 22:04:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:11.397 22:04:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:11.397 22:04:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:11.397 22:04:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:11.397 22:04:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:11.397 22:04:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:11.397 22:04:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:11.397 22:04:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:11.397 22:04:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:11.397 22:04:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:18:11.397 22:04:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:11.397 22:04:21 -- nvmf/common.sh@104 -- # continue 2 00:18:11.397 22:04:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:11.397 22:04:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:11.397 22:04:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@104 -- # continue 2 00:18:11.397 22:04:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:11.397 22:04:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:11.397 22:04:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:11.397 22:04:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:11.397 22:04:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:11.397 22:04:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:11.397 22:04:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:11.397 192.168.100.9' 00:18:11.397 22:04:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:11.397 192.168.100.9' 00:18:11.397 22:04:21 -- nvmf/common.sh@445 -- # head -n 1 00:18:11.397 22:04:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:11.397 22:04:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:11.397 192.168.100.9' 00:18:11.397 22:04:21 -- nvmf/common.sh@446 -- # tail -n +2 00:18:11.397 22:04:21 -- nvmf/common.sh@446 -- # head -n 1 00:18:11.397 22:04:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:11.397 22:04:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:11.397 22:04:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:11.397 22:04:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:11.397 22:04:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:11.397 22:04:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:11.397 22:04:21 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:11.397 22:04:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:11.397 22:04:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:11.397 22:04:21 -- common/autotest_common.sh@10 -- # set +x 00:18:11.397 22:04:21 -- nvmf/common.sh@469 -- # nvmfpid=2183232 00:18:11.397 22:04:21 -- nvmf/common.sh@470 -- # waitforlisten 2183232 00:18:11.397 22:04:21 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:11.397 22:04:21 -- common/autotest_common.sh@819 -- # '[' -z 2183232 ']' 00:18:11.397 22:04:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.397 22:04:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.397 22:04:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.397 22:04:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.397 22:04:21 -- common/autotest_common.sh@10 -- # set +x 00:18:11.397 [2024-07-26 22:04:21.933446] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:11.397 [2024-07-26 22:04:21.933501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.397 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.397 [2024-07-26 22:04:22.018938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.397 [2024-07-26 22:04:22.056375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:11.397 [2024-07-26 22:04:22.056478] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.397 [2024-07-26 22:04:22.056488] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.397 [2024-07-26 22:04:22.056497] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.397 [2024-07-26 22:04:22.056516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.656 22:04:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:11.656 22:04:22 -- common/autotest_common.sh@852 -- # return 0 00:18:11.656 22:04:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:11.656 22:04:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:11.656 22:04:22 -- common/autotest_common.sh@10 -- # set +x 00:18:11.656 22:04:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.656 22:04:22 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:11.916 [2024-07-26 22:04:22.926792] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd91320/0xd95810) succeed. 00:18:11.916 [2024-07-26 22:04:22.935300] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd92820/0xdd6ea0) succeed. 
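[editor's note] At this point the trace has finished bringing up the NVMe-oF/RDMA target before the lvs_grow tests start. As a reference only, a minimal sketch of that bring-up sequence, assuming the workspace paths of this run; the binary flags and transport options below are copied from the commands already logged above, not from any other source:

  # start the SPDK NVMe-oF target with the same shm id, trace mask and core mask as the test run
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

  # once the RPC socket /var/tmp/spdk.sock is listening, create the RDMA transport
  # (same shared-buffer count and 8 KiB in-capsule data size as traced above)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The lvs_grow_clean and lvs_grow_dirty tests that follow reuse this target and only add their own subsystems, namespaces and listeners on top of it.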
00:18:11.916 22:04:22 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:11.916 22:04:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:11.916 22:04:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:11.916 22:04:22 -- common/autotest_common.sh@10 -- # set +x 00:18:11.916 ************************************ 00:18:11.916 START TEST lvs_grow_clean 00:18:11.916 ************************************ 00:18:11.916 22:04:23 -- common/autotest_common.sh@1104 -- # lvs_grow 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:11.916 22:04:23 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:12.175 22:04:23 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:12.175 22:04:23 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:12.175 22:04:23 -- target/nvmf_lvs_grow.sh@28 -- # lvs=b04ec263-a959-4144-8232-f51c41971f56 00:18:12.175 22:04:23 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04ec263-a959-4144-8232-f51c41971f56 00:18:12.175 22:04:23 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:12.433 22:04:23 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:12.433 22:04:23 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:12.433 22:04:23 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b04ec263-a959-4144-8232-f51c41971f56 lvol 150 00:18:12.691 22:04:23 -- target/nvmf_lvs_grow.sh@33 -- # lvol=324cc07a-0bac-4d47-88bf-fb06252542ad 00:18:12.691 22:04:23 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:12.691 22:04:23 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:12.691 [2024-07-26 22:04:23.814806] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:12.691 [2024-07-26 22:04:23.814854] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:12.691 true 00:18:12.691 22:04:23 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:12.691 22:04:23 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b04ec263-a959-4144-8232-f51c41971f56 00:18:12.949 22:04:23 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:12.949 22:04:23 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:12.949 22:04:24 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 324cc07a-0bac-4d47-88bf-fb06252542ad 00:18:13.207 22:04:24 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:13.466 [2024-07-26 22:04:24.440888] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:13.466 22:04:24 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:13.466 22:04:24 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2183604 00:18:13.466 22:04:24 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.466 22:04:24 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:13.466 22:04:24 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2183604 /var/tmp/bdevperf.sock 00:18:13.466 22:04:24 -- common/autotest_common.sh@819 -- # '[' -z 2183604 ']' 00:18:13.466 22:04:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.466 22:04:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:13.466 22:04:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.466 22:04:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:13.466 22:04:24 -- common/autotest_common.sh@10 -- # set +x 00:18:13.466 [2024-07-26 22:04:24.646620] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:13.466 [2024-07-26 22:04:24.646682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183604 ] 00:18:13.466 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.724 [2024-07-26 22:04:24.729727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.724 [2024-07-26 22:04:24.766548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.292 22:04:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:14.292 22:04:25 -- common/autotest_common.sh@852 -- # return 0 00:18:14.292 22:04:25 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:14.551 Nvme0n1 00:18:14.551 22:04:25 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:14.810 [ 00:18:14.810 { 00:18:14.810 "name": "Nvme0n1", 00:18:14.810 "aliases": [ 00:18:14.810 "324cc07a-0bac-4d47-88bf-fb06252542ad" 00:18:14.810 ], 00:18:14.810 "product_name": "NVMe disk", 00:18:14.810 "block_size": 4096, 00:18:14.810 "num_blocks": 38912, 00:18:14.810 "uuid": "324cc07a-0bac-4d47-88bf-fb06252542ad", 00:18:14.810 "assigned_rate_limits": { 00:18:14.810 "rw_ios_per_sec": 0, 00:18:14.810 "rw_mbytes_per_sec": 0, 00:18:14.810 "r_mbytes_per_sec": 0, 00:18:14.810 "w_mbytes_per_sec": 0 00:18:14.810 }, 00:18:14.810 "claimed": false, 00:18:14.810 "zoned": false, 00:18:14.810 "supported_io_types": { 00:18:14.810 "read": true, 00:18:14.810 "write": true, 00:18:14.810 "unmap": true, 00:18:14.810 "write_zeroes": true, 00:18:14.810 "flush": true, 00:18:14.810 "reset": true, 00:18:14.810 "compare": true, 00:18:14.810 "compare_and_write": true, 00:18:14.810 "abort": true, 00:18:14.810 "nvme_admin": true, 00:18:14.810 "nvme_io": true 00:18:14.810 }, 00:18:14.810 "memory_domains": [ 00:18:14.810 { 00:18:14.810 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:14.810 "dma_device_type": 0 00:18:14.810 } 00:18:14.810 ], 00:18:14.810 "driver_specific": { 00:18:14.810 "nvme": [ 00:18:14.810 { 00:18:14.810 "trid": { 00:18:14.810 "trtype": "RDMA", 00:18:14.810 "adrfam": "IPv4", 00:18:14.810 "traddr": "192.168.100.8", 00:18:14.810 "trsvcid": "4420", 00:18:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:14.810 }, 00:18:14.810 "ctrlr_data": { 00:18:14.810 "cntlid": 1, 00:18:14.810 "vendor_id": "0x8086", 00:18:14.810 "model_number": "SPDK bdev Controller", 00:18:14.810 "serial_number": "SPDK0", 00:18:14.810 "firmware_revision": "24.01.1", 00:18:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:14.810 "oacs": { 00:18:14.810 "security": 0, 00:18:14.810 "format": 0, 00:18:14.810 "firmware": 0, 00:18:14.810 "ns_manage": 0 00:18:14.810 }, 00:18:14.810 "multi_ctrlr": true, 00:18:14.810 "ana_reporting": false 00:18:14.810 }, 00:18:14.810 "vs": { 00:18:14.810 "nvme_version": "1.3" 00:18:14.810 }, 00:18:14.810 "ns_data": { 00:18:14.810 "id": 1, 00:18:14.810 "can_share": true 00:18:14.810 } 00:18:14.810 } 00:18:14.810 ], 00:18:14.810 "mp_policy": "active_passive" 00:18:14.810 } 00:18:14.810 } 00:18:14.810 ] 00:18:14.810 22:04:25 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2183863 00:18:14.810 22:04:25 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:14.810 22:04:25 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.810 Running I/O for 10 seconds... 00:18:16.189 Latency(us) 00:18:16.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:16.189 Nvme0n1 : 1.00 36515.00 142.64 0.00 0.00 0.00 0.00 0.00 00:18:16.189 =================================================================================================================== 00:18:16.189 Total : 36515.00 142.64 0.00 0.00 0.00 0.00 0.00 00:18:16.189 00:18:16.757 22:04:27 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b04ec263-a959-4144-8232-f51c41971f56 00:18:17.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.017 Nvme0n1 : 2.00 36977.50 144.44 0.00 0.00 0.00 0.00 0.00 00:18:17.017 =================================================================================================================== 00:18:17.017 Total : 36977.50 144.44 0.00 0.00 0.00 0.00 0.00 00:18:17.017 00:18:17.017 true 00:18:17.017 22:04:28 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04ec263-a959-4144-8232-f51c41971f56 00:18:17.017 22:04:28 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:17.017 22:04:28 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:17.017 22:04:28 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:17.017 22:04:28 -- target/nvmf_lvs_grow.sh@65 -- # wait 2183863 00:18:17.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:17.954 Nvme0n1 : 3.00 37144.33 145.10 0.00 0.00 0.00 0.00 0.00 00:18:17.954 =================================================================================================================== 00:18:17.954 Total : 37144.33 145.10 0.00 0.00 0.00 0.00 0.00 00:18:17.954 00:18:18.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:18.890 Nvme0n1 : 4.00 37281.75 145.63 0.00 0.00 0.00 0.00 0.00 00:18:18.890 =================================================================================================================== 00:18:18.890 Total : 37281.75 145.63 0.00 0.00 0.00 0.00 0.00 00:18:18.890 00:18:19.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.827 Nvme0n1 : 5.00 37204.40 145.33 0.00 0.00 0.00 0.00 0.00 00:18:19.827 =================================================================================================================== 00:18:19.827 Total : 37204.40 145.33 0.00 0.00 0.00 0.00 0.00 00:18:19.827 00:18:21.276 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.276 Nvme0n1 : 6.00 37289.83 145.66 0.00 0.00 0.00 0.00 0.00 00:18:21.276 =================================================================================================================== 00:18:21.276 Total : 37289.83 145.66 0.00 0.00 0.00 0.00 0.00 00:18:21.276 00:18:21.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:21.844 Nvme0n1 : 7.00 37356.86 145.93 0.00 0.00 0.00 0.00 0.00 00:18:21.844 =================================================================================================================== 00:18:21.844 Total : 37356.86 145.93 0.00 0.00 0.00 0.00 0.00 00:18:21.844 
00:18:22.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:22.777 Nvme0n1 : 8.00 37419.50 146.17 0.00 0.00 0.00 0.00 0.00 00:18:22.777 =================================================================================================================== 00:18:22.777 Total : 37419.50 146.17 0.00 0.00 0.00 0.00 0.00 00:18:22.777 00:18:24.151 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:24.151 Nvme0n1 : 9.00 37460.89 146.33 0.00 0.00 0.00 0.00 0.00 00:18:24.151 =================================================================================================================== 00:18:24.151 Total : 37460.89 146.33 0.00 0.00 0.00 0.00 0.00 00:18:24.151 00:18:25.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.087 Nvme0n1 : 10.00 37493.90 146.46 0.00 0.00 0.00 0.00 0.00 00:18:25.087 =================================================================================================================== 00:18:25.087 Total : 37493.90 146.46 0.00 0.00 0.00 0.00 0.00 00:18:25.087 00:18:25.087 00:18:25.087 Latency(us) 00:18:25.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:25.087 Nvme0n1 : 10.00 37493.66 146.46 0.00 0.00 3411.67 2267.55 16043.21 00:18:25.087 =================================================================================================================== 00:18:25.087 Total : 37493.66 146.46 0.00 0.00 3411.67 2267.55 16043.21 00:18:25.087 0 00:18:25.087 22:04:36 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2183604 00:18:25.087 22:04:36 -- common/autotest_common.sh@926 -- # '[' -z 2183604 ']' 00:18:25.087 22:04:36 -- common/autotest_common.sh@930 -- # kill -0 2183604 00:18:25.087 22:04:36 -- common/autotest_common.sh@931 -- # uname 00:18:25.087 22:04:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:25.087 22:04:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2183604 00:18:25.087 22:04:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:25.087 22:04:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:25.087 22:04:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2183604' 00:18:25.087 killing process with pid 2183604 00:18:25.087 22:04:36 -- common/autotest_common.sh@945 -- # kill 2183604 00:18:25.087 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.087 00:18:25.087 Latency(us) 00:18:25.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.087 =================================================================================================================== 00:18:25.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.087 22:04:36 -- common/autotest_common.sh@950 -- # wait 2183604 00:18:25.087 22:04:36 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:25.346 22:04:36 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04ec263-a959-4144-8232-f51c41971f56 00:18:25.346 22:04:36 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:25.604 22:04:36 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:25.604 22:04:36 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:25.604 22:04:36 -- target/nvmf_lvs_grow.sh@83 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:25.605 [2024-07-26 22:04:36.771920] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:25.605 22:04:36 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04ec263-a959-4144-8232-f51c41971f56 00:18:25.605 22:04:36 -- common/autotest_common.sh@640 -- # local es=0 00:18:25.605 22:04:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04ec263-a959-4144-8232-f51c41971f56 00:18:25.605 22:04:36 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:25.605 22:04:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:25.605 22:04:36 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:25.605 22:04:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:25.605 22:04:36 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:25.605 22:04:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:25.605 22:04:36 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:25.605 22:04:36 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:25.605 22:04:36 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04ec263-a959-4144-8232-f51c41971f56 00:18:25.864 request: 00:18:25.864 { 00:18:25.864 "uuid": "b04ec263-a959-4144-8232-f51c41971f56", 00:18:25.864 "method": "bdev_lvol_get_lvstores", 00:18:25.864 "req_id": 1 00:18:25.864 } 00:18:25.864 Got JSON-RPC error response 00:18:25.864 response: 00:18:25.864 { 00:18:25.864 "code": -19, 00:18:25.864 "message": "No such device" 00:18:25.864 } 00:18:25.864 22:04:36 -- common/autotest_common.sh@643 -- # es=1 00:18:25.864 22:04:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:25.864 22:04:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:25.864 22:04:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:25.864 22:04:36 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:26.123 aio_bdev 00:18:26.123 22:04:37 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 324cc07a-0bac-4d47-88bf-fb06252542ad 00:18:26.123 22:04:37 -- common/autotest_common.sh@887 -- # local bdev_name=324cc07a-0bac-4d47-88bf-fb06252542ad 00:18:26.123 22:04:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:26.123 22:04:37 -- common/autotest_common.sh@889 -- # local i 00:18:26.123 22:04:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:26.123 22:04:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:26.123 22:04:37 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:26.123 22:04:37 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 324cc07a-0bac-4d47-88bf-fb06252542ad -t 2000 00:18:26.382 [ 00:18:26.382 { 00:18:26.382 "name": 
"324cc07a-0bac-4d47-88bf-fb06252542ad", 00:18:26.382 "aliases": [ 00:18:26.382 "lvs/lvol" 00:18:26.382 ], 00:18:26.382 "product_name": "Logical Volume", 00:18:26.382 "block_size": 4096, 00:18:26.382 "num_blocks": 38912, 00:18:26.382 "uuid": "324cc07a-0bac-4d47-88bf-fb06252542ad", 00:18:26.382 "assigned_rate_limits": { 00:18:26.382 "rw_ios_per_sec": 0, 00:18:26.382 "rw_mbytes_per_sec": 0, 00:18:26.382 "r_mbytes_per_sec": 0, 00:18:26.382 "w_mbytes_per_sec": 0 00:18:26.382 }, 00:18:26.382 "claimed": false, 00:18:26.382 "zoned": false, 00:18:26.382 "supported_io_types": { 00:18:26.382 "read": true, 00:18:26.382 "write": true, 00:18:26.382 "unmap": true, 00:18:26.382 "write_zeroes": true, 00:18:26.382 "flush": false, 00:18:26.382 "reset": true, 00:18:26.382 "compare": false, 00:18:26.382 "compare_and_write": false, 00:18:26.382 "abort": false, 00:18:26.382 "nvme_admin": false, 00:18:26.382 "nvme_io": false 00:18:26.382 }, 00:18:26.382 "driver_specific": { 00:18:26.382 "lvol": { 00:18:26.382 "lvol_store_uuid": "b04ec263-a959-4144-8232-f51c41971f56", 00:18:26.382 "base_bdev": "aio_bdev", 00:18:26.382 "thin_provision": false, 00:18:26.382 "snapshot": false, 00:18:26.382 "clone": false, 00:18:26.382 "esnap_clone": false 00:18:26.382 } 00:18:26.382 } 00:18:26.382 } 00:18:26.382 ] 00:18:26.382 22:04:37 -- common/autotest_common.sh@895 -- # return 0 00:18:26.382 22:04:37 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04ec263-a959-4144-8232-f51c41971f56 00:18:26.382 22:04:37 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:26.382 22:04:37 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:26.382 22:04:37 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b04ec263-a959-4144-8232-f51c41971f56 00:18:26.641 22:04:37 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:26.641 22:04:37 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:26.641 22:04:37 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 324cc07a-0bac-4d47-88bf-fb06252542ad 00:18:26.900 22:04:37 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b04ec263-a959-4144-8232-f51c41971f56 00:18:26.900 22:04:38 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:27.159 00:18:27.159 real 0m15.256s 00:18:27.159 user 0m15.179s 00:18:27.159 sys 0m1.121s 00:18:27.159 22:04:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:27.159 22:04:38 -- common/autotest_common.sh@10 -- # set +x 00:18:27.159 ************************************ 00:18:27.159 END TEST lvs_grow_clean 00:18:27.159 ************************************ 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:27.159 22:04:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:27.159 22:04:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:27.159 22:04:38 -- common/autotest_common.sh@10 -- # set +x 00:18:27.159 ************************************ 00:18:27.159 START TEST lvs_grow_dirty 00:18:27.159 ************************************ 00:18:27.159 22:04:38 -- 
common/autotest_common.sh@1104 -- # lvs_grow dirty 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:27.159 22:04:38 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:27.418 22:04:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:27.418 22:04:38 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:27.678 22:04:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:27.678 22:04:38 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:27.678 22:04:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:27.678 22:04:38 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:27.678 22:04:38 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:27.678 22:04:38 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 lvol 150 00:18:27.937 22:04:38 -- target/nvmf_lvs_grow.sh@33 -- # lvol=1750f230-e30b-4807-aec2-af56fab060c5 00:18:27.937 22:04:38 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:27.937 22:04:38 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:27.937 [2024-07-26 22:04:39.129255] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:27.937 [2024-07-26 22:04:39.129305] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:27.937 true 00:18:27.937 22:04:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:27.937 22:04:39 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:28.196 22:04:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:28.196 22:04:39 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:28.455 22:04:39 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
1750f230-e30b-4807-aec2-af56fab060c5 00:18:28.455 22:04:39 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:28.714 22:04:39 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:28.714 22:04:39 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:28.714 22:04:39 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2186348 00:18:28.714 22:04:39 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:28.714 22:04:39 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2186348 /var/tmp/bdevperf.sock 00:18:28.714 22:04:39 -- common/autotest_common.sh@819 -- # '[' -z 2186348 ']' 00:18:28.714 22:04:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.714 22:04:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:28.714 22:04:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.714 22:04:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:28.714 22:04:39 -- common/autotest_common.sh@10 -- # set +x 00:18:28.973 [2024-07-26 22:04:39.954921] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:28.973 [2024-07-26 22:04:39.954975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186348 ] 00:18:28.973 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.973 [2024-07-26 22:04:40.040856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.973 [2024-07-26 22:04:40.077739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.541 22:04:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:29.541 22:04:40 -- common/autotest_common.sh@852 -- # return 0 00:18:29.541 22:04:40 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:29.800 Nvme0n1 00:18:29.800 22:04:40 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:30.059 [ 00:18:30.059 { 00:18:30.059 "name": "Nvme0n1", 00:18:30.059 "aliases": [ 00:18:30.059 "1750f230-e30b-4807-aec2-af56fab060c5" 00:18:30.059 ], 00:18:30.059 "product_name": "NVMe disk", 00:18:30.059 "block_size": 4096, 00:18:30.059 "num_blocks": 38912, 00:18:30.059 "uuid": "1750f230-e30b-4807-aec2-af56fab060c5", 00:18:30.059 "assigned_rate_limits": { 00:18:30.059 "rw_ios_per_sec": 0, 00:18:30.059 "rw_mbytes_per_sec": 0, 00:18:30.059 "r_mbytes_per_sec": 0, 00:18:30.059 "w_mbytes_per_sec": 0 00:18:30.059 }, 00:18:30.059 "claimed": false, 00:18:30.059 "zoned": false, 00:18:30.059 "supported_io_types": { 00:18:30.059 "read": true, 00:18:30.059 "write": true, 
00:18:30.059 "unmap": true, 00:18:30.059 "write_zeroes": true, 00:18:30.059 "flush": true, 00:18:30.059 "reset": true, 00:18:30.059 "compare": true, 00:18:30.059 "compare_and_write": true, 00:18:30.059 "abort": true, 00:18:30.060 "nvme_admin": true, 00:18:30.060 "nvme_io": true 00:18:30.060 }, 00:18:30.060 "memory_domains": [ 00:18:30.060 { 00:18:30.060 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:30.060 "dma_device_type": 0 00:18:30.060 } 00:18:30.060 ], 00:18:30.060 "driver_specific": { 00:18:30.060 "nvme": [ 00:18:30.060 { 00:18:30.060 "trid": { 00:18:30.060 "trtype": "RDMA", 00:18:30.060 "adrfam": "IPv4", 00:18:30.060 "traddr": "192.168.100.8", 00:18:30.060 "trsvcid": "4420", 00:18:30.060 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:30.060 }, 00:18:30.060 "ctrlr_data": { 00:18:30.060 "cntlid": 1, 00:18:30.060 "vendor_id": "0x8086", 00:18:30.060 "model_number": "SPDK bdev Controller", 00:18:30.060 "serial_number": "SPDK0", 00:18:30.060 "firmware_revision": "24.01.1", 00:18:30.060 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:30.060 "oacs": { 00:18:30.060 "security": 0, 00:18:30.060 "format": 0, 00:18:30.060 "firmware": 0, 00:18:30.060 "ns_manage": 0 00:18:30.060 }, 00:18:30.060 "multi_ctrlr": true, 00:18:30.060 "ana_reporting": false 00:18:30.060 }, 00:18:30.060 "vs": { 00:18:30.060 "nvme_version": "1.3" 00:18:30.060 }, 00:18:30.060 "ns_data": { 00:18:30.060 "id": 1, 00:18:30.060 "can_share": true 00:18:30.060 } 00:18:30.060 } 00:18:30.060 ], 00:18:30.060 "mp_policy": "active_passive" 00:18:30.060 } 00:18:30.060 } 00:18:30.060 ] 00:18:30.060 22:04:41 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2186620 00:18:30.060 22:04:41 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:30.060 22:04:41 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:30.060 Running I/O for 10 seconds... 
00:18:31.438 Latency(us) 00:18:31.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:31.438 Nvme0n1 : 1.00 36704.00 143.38 0.00 0.00 0.00 0.00 0.00 00:18:31.438 =================================================================================================================== 00:18:31.438 Total : 36704.00 143.38 0.00 0.00 0.00 0.00 0.00 00:18:31.438 00:18:32.008 22:04:43 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:32.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:32.266 Nvme0n1 : 2.00 37041.00 144.69 0.00 0.00 0.00 0.00 0.00 00:18:32.266 =================================================================================================================== 00:18:32.266 Total : 37041.00 144.69 0.00 0.00 0.00 0.00 0.00 00:18:32.266 00:18:32.266 true 00:18:32.266 22:04:43 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:32.266 22:04:43 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:32.524 22:04:43 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:32.524 22:04:43 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:32.524 22:04:43 -- target/nvmf_lvs_grow.sh@65 -- # wait 2186620 00:18:33.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:33.092 Nvme0n1 : 3.00 37120.67 145.00 0.00 0.00 0.00 0.00 0.00 00:18:33.092 =================================================================================================================== 00:18:33.092 Total : 37120.67 145.00 0.00 0.00 0.00 0.00 0.00 00:18:33.092 00:18:34.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:34.471 Nvme0n1 : 4.00 37248.50 145.50 0.00 0.00 0.00 0.00 0.00 00:18:34.471 =================================================================================================================== 00:18:34.471 Total : 37248.50 145.50 0.00 0.00 0.00 0.00 0.00 00:18:34.471 00:18:35.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:35.040 Nvme0n1 : 5.00 37343.00 145.87 0.00 0.00 0.00 0.00 0.00 00:18:35.040 =================================================================================================================== 00:18:35.040 Total : 37343.00 145.87 0.00 0.00 0.00 0.00 0.00 00:18:35.040 00:18:36.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:36.419 Nvme0n1 : 6.00 37403.67 146.11 0.00 0.00 0.00 0.00 0.00 00:18:36.419 =================================================================================================================== 00:18:36.419 Total : 37403.67 146.11 0.00 0.00 0.00 0.00 0.00 00:18:36.419 00:18:37.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.356 Nvme0n1 : 7.00 37457.57 146.32 0.00 0.00 0.00 0.00 0.00 00:18:37.356 =================================================================================================================== 00:18:37.356 Total : 37457.57 146.32 0.00 0.00 0.00 0.00 0.00 00:18:37.356 00:18:38.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.310 Nvme0n1 : 8.00 37487.38 146.44 0.00 0.00 0.00 0.00 0.00 00:18:38.310 
=================================================================================================================== 00:18:38.310 Total : 37487.38 146.44 0.00 0.00 0.00 0.00 0.00 00:18:38.310 00:18:39.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.296 Nvme0n1 : 9.00 37472.67 146.38 0.00 0.00 0.00 0.00 0.00 00:18:39.296 =================================================================================================================== 00:18:39.296 Total : 37472.67 146.38 0.00 0.00 0.00 0.00 0.00 00:18:39.296 00:18:40.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.233 Nvme0n1 : 10.00 37494.70 146.46 0.00 0.00 0.00 0.00 0.00 00:18:40.233 =================================================================================================================== 00:18:40.233 Total : 37494.70 146.46 0.00 0.00 0.00 0.00 0.00 00:18:40.233 00:18:40.233 00:18:40.233 Latency(us) 00:18:40.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.233 Nvme0n1 : 10.00 37495.60 146.47 0.00 0.00 3411.49 2516.58 9961.47 00:18:40.233 =================================================================================================================== 00:18:40.233 Total : 37495.60 146.47 0.00 0.00 3411.49 2516.58 9961.47 00:18:40.233 0 00:18:40.233 22:04:51 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2186348 00:18:40.233 22:04:51 -- common/autotest_common.sh@926 -- # '[' -z 2186348 ']' 00:18:40.233 22:04:51 -- common/autotest_common.sh@930 -- # kill -0 2186348 00:18:40.233 22:04:51 -- common/autotest_common.sh@931 -- # uname 00:18:40.233 22:04:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:40.233 22:04:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2186348 00:18:40.233 22:04:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:40.233 22:04:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:40.233 22:04:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2186348' 00:18:40.233 killing process with pid 2186348 00:18:40.233 22:04:51 -- common/autotest_common.sh@945 -- # kill 2186348 00:18:40.233 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.233 00:18:40.233 Latency(us) 00:18:40.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.233 =================================================================================================================== 00:18:40.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.233 22:04:51 -- common/autotest_common.sh@950 -- # wait 2186348 00:18:40.493 22:04:51 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:40.751 22:04:51 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:40.751 22:04:51 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:40.751 22:04:51 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:40.751 22:04:51 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:18:40.751 22:04:51 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2183232 00:18:40.751 22:04:51 -- target/nvmf_lvs_grow.sh@74 -- # wait 2183232 00:18:40.751 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2183232 Killed "${NVMF_APP[@]}" "$@" 00:18:40.751 22:04:51 -- target/nvmf_lvs_grow.sh@74 -- # true 00:18:40.751 22:04:51 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:18:40.751 22:04:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:40.751 22:04:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:40.751 22:04:51 -- common/autotest_common.sh@10 -- # set +x 00:18:40.751 22:04:51 -- nvmf/common.sh@469 -- # nvmfpid=2188476 00:18:40.751 22:04:51 -- nvmf/common.sh@470 -- # waitforlisten 2188476 00:18:40.751 22:04:51 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:40.751 22:04:51 -- common/autotest_common.sh@819 -- # '[' -z 2188476 ']' 00:18:40.751 22:04:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.751 22:04:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:40.751 22:04:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.751 22:04:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:40.751 22:04:51 -- common/autotest_common.sh@10 -- # set +x 00:18:41.022 [2024-07-26 22:04:51.990984] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:41.022 [2024-07-26 22:04:51.991039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.022 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.022 [2024-07-26 22:04:52.078332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.022 [2024-07-26 22:04:52.115717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:41.022 [2024-07-26 22:04:52.115821] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.022 [2024-07-26 22:04:52.115830] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.022 [2024-07-26 22:04:52.115839] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:41.022 [2024-07-26 22:04:52.115856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.593 22:04:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:41.593 22:04:52 -- common/autotest_common.sh@852 -- # return 0 00:18:41.593 22:04:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:41.593 22:04:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:41.593 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:18:41.593 22:04:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.593 22:04:52 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:41.851 [2024-07-26 22:04:52.959356] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:41.851 [2024-07-26 22:04:52.959454] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:41.851 [2024-07-26 22:04:52.959480] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:41.851 22:04:52 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:18:41.851 22:04:52 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 1750f230-e30b-4807-aec2-af56fab060c5 00:18:41.851 22:04:52 -- common/autotest_common.sh@887 -- # local bdev_name=1750f230-e30b-4807-aec2-af56fab060c5 00:18:41.851 22:04:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:41.851 22:04:52 -- common/autotest_common.sh@889 -- # local i 00:18:41.851 22:04:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:41.851 22:04:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:41.851 22:04:52 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:42.109 22:04:53 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1750f230-e30b-4807-aec2-af56fab060c5 -t 2000 00:18:42.109 [ 00:18:42.109 { 00:18:42.109 "name": "1750f230-e30b-4807-aec2-af56fab060c5", 00:18:42.109 "aliases": [ 00:18:42.109 "lvs/lvol" 00:18:42.109 ], 00:18:42.109 "product_name": "Logical Volume", 00:18:42.109 "block_size": 4096, 00:18:42.109 "num_blocks": 38912, 00:18:42.109 "uuid": "1750f230-e30b-4807-aec2-af56fab060c5", 00:18:42.109 "assigned_rate_limits": { 00:18:42.109 "rw_ios_per_sec": 0, 00:18:42.109 "rw_mbytes_per_sec": 0, 00:18:42.109 "r_mbytes_per_sec": 0, 00:18:42.110 "w_mbytes_per_sec": 0 00:18:42.110 }, 00:18:42.110 "claimed": false, 00:18:42.110 "zoned": false, 00:18:42.110 "supported_io_types": { 00:18:42.110 "read": true, 00:18:42.110 "write": true, 00:18:42.110 "unmap": true, 00:18:42.110 "write_zeroes": true, 00:18:42.110 "flush": false, 00:18:42.110 "reset": true, 00:18:42.110 "compare": false, 00:18:42.110 "compare_and_write": false, 00:18:42.110 "abort": false, 00:18:42.110 "nvme_admin": false, 00:18:42.110 "nvme_io": false 00:18:42.110 }, 00:18:42.110 "driver_specific": { 00:18:42.110 "lvol": { 00:18:42.110 "lvol_store_uuid": "fa0aedbc-c66c-4913-8d0b-9677981e3b24", 00:18:42.110 "base_bdev": "aio_bdev", 00:18:42.110 "thin_provision": false, 00:18:42.110 "snapshot": false, 00:18:42.110 "clone": false, 00:18:42.110 "esnap_clone": false 00:18:42.110 } 00:18:42.110 } 00:18:42.110 } 00:18:42.110 ] 00:18:42.110 22:04:53 -- common/autotest_common.sh@895 -- # return 0 00:18:42.110 22:04:53 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:42.110 22:04:53 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:18:42.368 22:04:53 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:18:42.368 22:04:53 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:42.368 22:04:53 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:18:42.627 22:04:53 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:18:42.627 22:04:53 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:42.627 [2024-07-26 22:04:53.739582] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:42.627 22:04:53 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:42.627 22:04:53 -- common/autotest_common.sh@640 -- # local es=0 00:18:42.627 22:04:53 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:42.627 22:04:53 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:42.627 22:04:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.627 22:04:53 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:42.627 22:04:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.627 22:04:53 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:42.627 22:04:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.627 22:04:53 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:42.627 22:04:53 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:42.627 22:04:53 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:42.887 request: 00:18:42.887 { 00:18:42.887 "uuid": "fa0aedbc-c66c-4913-8d0b-9677981e3b24", 00:18:42.887 "method": "bdev_lvol_get_lvstores", 00:18:42.887 "req_id": 1 00:18:42.887 } 00:18:42.887 Got JSON-RPC error response 00:18:42.887 response: 00:18:42.887 { 00:18:42.887 "code": -19, 00:18:42.887 "message": "No such device" 00:18:42.887 } 00:18:42.887 22:04:53 -- common/autotest_common.sh@643 -- # es=1 00:18:42.887 22:04:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:42.887 22:04:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:42.887 22:04:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:42.887 22:04:53 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:42.887 aio_bdev 00:18:42.887 22:04:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 1750f230-e30b-4807-aec2-af56fab060c5 00:18:42.887 22:04:54 -- common/autotest_common.sh@887 -- # local 
bdev_name=1750f230-e30b-4807-aec2-af56fab060c5 00:18:42.887 22:04:54 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:42.887 22:04:54 -- common/autotest_common.sh@889 -- # local i 00:18:42.887 22:04:54 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:42.887 22:04:54 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:42.887 22:04:54 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:43.146 22:04:54 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1750f230-e30b-4807-aec2-af56fab060c5 -t 2000 00:18:43.404 [ 00:18:43.404 { 00:18:43.404 "name": "1750f230-e30b-4807-aec2-af56fab060c5", 00:18:43.404 "aliases": [ 00:18:43.404 "lvs/lvol" 00:18:43.404 ], 00:18:43.404 "product_name": "Logical Volume", 00:18:43.404 "block_size": 4096, 00:18:43.404 "num_blocks": 38912, 00:18:43.404 "uuid": "1750f230-e30b-4807-aec2-af56fab060c5", 00:18:43.404 "assigned_rate_limits": { 00:18:43.404 "rw_ios_per_sec": 0, 00:18:43.404 "rw_mbytes_per_sec": 0, 00:18:43.404 "r_mbytes_per_sec": 0, 00:18:43.404 "w_mbytes_per_sec": 0 00:18:43.404 }, 00:18:43.404 "claimed": false, 00:18:43.404 "zoned": false, 00:18:43.404 "supported_io_types": { 00:18:43.404 "read": true, 00:18:43.404 "write": true, 00:18:43.404 "unmap": true, 00:18:43.404 "write_zeroes": true, 00:18:43.404 "flush": false, 00:18:43.404 "reset": true, 00:18:43.404 "compare": false, 00:18:43.404 "compare_and_write": false, 00:18:43.404 "abort": false, 00:18:43.404 "nvme_admin": false, 00:18:43.404 "nvme_io": false 00:18:43.404 }, 00:18:43.404 "driver_specific": { 00:18:43.404 "lvol": { 00:18:43.404 "lvol_store_uuid": "fa0aedbc-c66c-4913-8d0b-9677981e3b24", 00:18:43.404 "base_bdev": "aio_bdev", 00:18:43.404 "thin_provision": false, 00:18:43.404 "snapshot": false, 00:18:43.404 "clone": false, 00:18:43.404 "esnap_clone": false 00:18:43.404 } 00:18:43.404 } 00:18:43.404 } 00:18:43.404 ] 00:18:43.404 22:04:54 -- common/autotest_common.sh@895 -- # return 0 00:18:43.404 22:04:54 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:43.404 22:04:54 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:43.404 22:04:54 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:43.404 22:04:54 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:43.404 22:04:54 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:43.663 22:04:54 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:43.663 22:04:54 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1750f230-e30b-4807-aec2-af56fab060c5 00:18:43.663 22:04:54 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa0aedbc-c66c-4913-8d0b-9677981e3b24 00:18:43.920 22:04:55 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:44.178 22:04:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:44.178 00:18:44.178 real 0m16.948s 00:18:44.178 user 0m43.943s 00:18:44.178 sys 0m3.270s 00:18:44.178 22:04:55 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.178 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:18:44.178 ************************************ 00:18:44.178 END TEST lvs_grow_dirty 00:18:44.178 ************************************ 00:18:44.178 22:04:55 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:44.178 22:04:55 -- common/autotest_common.sh@796 -- # type=--id 00:18:44.178 22:04:55 -- common/autotest_common.sh@797 -- # id=0 00:18:44.178 22:04:55 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:44.178 22:04:55 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:44.178 22:04:55 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:44.178 22:04:55 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:44.178 22:04:55 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:44.178 22:04:55 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:44.178 nvmf_trace.0 00:18:44.178 22:04:55 -- common/autotest_common.sh@811 -- # return 0 00:18:44.178 22:04:55 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:44.178 22:04:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:44.178 22:04:55 -- nvmf/common.sh@116 -- # sync 00:18:44.178 22:04:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:44.178 22:04:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:44.178 22:04:55 -- nvmf/common.sh@119 -- # set +e 00:18:44.178 22:04:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:44.178 22:04:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:44.178 rmmod nvme_rdma 00:18:44.178 rmmod nvme_fabrics 00:18:44.178 22:04:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:44.178 22:04:55 -- nvmf/common.sh@123 -- # set -e 00:18:44.178 22:04:55 -- nvmf/common.sh@124 -- # return 0 00:18:44.178 22:04:55 -- nvmf/common.sh@477 -- # '[' -n 2188476 ']' 00:18:44.178 22:04:55 -- nvmf/common.sh@478 -- # killprocess 2188476 00:18:44.178 22:04:55 -- common/autotest_common.sh@926 -- # '[' -z 2188476 ']' 00:18:44.178 22:04:55 -- common/autotest_common.sh@930 -- # kill -0 2188476 00:18:44.178 22:04:55 -- common/autotest_common.sh@931 -- # uname 00:18:44.178 22:04:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:44.178 22:04:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2188476 00:18:44.438 22:04:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:44.438 22:04:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:44.438 22:04:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2188476' 00:18:44.438 killing process with pid 2188476 00:18:44.438 22:04:55 -- common/autotest_common.sh@945 -- # kill 2188476 00:18:44.438 22:04:55 -- common/autotest_common.sh@950 -- # wait 2188476 00:18:44.438 22:04:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:44.438 22:04:55 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:44.438 00:18:44.438 real 0m41.438s 00:18:44.438 user 1m5.060s 00:18:44.438 sys 0m10.579s 00:18:44.438 22:04:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.438 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:18:44.438 ************************************ 00:18:44.438 END TEST nvmf_lvs_grow 00:18:44.438 ************************************ 00:18:44.438 22:04:55 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:44.438 22:04:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:44.438 22:04:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:44.438 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:18:44.438 ************************************ 00:18:44.438 START TEST nvmf_bdev_io_wait 00:18:44.438 ************************************ 00:18:44.438 22:04:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:18:44.698 * Looking for test storage... 00:18:44.698 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:44.698 22:04:55 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.698 22:04:55 -- nvmf/common.sh@7 -- # uname -s 00:18:44.698 22:04:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.698 22:04:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.698 22:04:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.698 22:04:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.698 22:04:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.698 22:04:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.698 22:04:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.698 22:04:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.698 22:04:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.698 22:04:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.698 22:04:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:44.698 22:04:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:44.698 22:04:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.698 22:04:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.698 22:04:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.698 22:04:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:44.698 22:04:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.698 22:04:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.698 22:04:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.698 22:04:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.698 22:04:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.698 22:04:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.698 22:04:55 -- paths/export.sh@5 -- # export PATH 00:18:44.698 22:04:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.698 22:04:55 -- nvmf/common.sh@46 -- # : 0 00:18:44.698 22:04:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:44.698 22:04:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:44.698 22:04:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:44.698 22:04:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.698 22:04:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.698 22:04:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:44.698 22:04:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:44.698 22:04:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:44.698 22:04:55 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:44.698 22:04:55 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:44.698 22:04:55 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:44.698 22:04:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:44.698 22:04:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.698 22:04:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:44.698 22:04:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:44.698 22:04:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:44.698 22:04:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.698 22:04:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.698 22:04:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.698 22:04:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:44.698 22:04:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:44.698 22:04:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:44.698 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:18:52.821 22:05:04 -- nvmf/common.sh@288 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:18:52.821 22:05:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:52.821 22:05:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:52.821 22:05:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:52.821 22:05:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:52.821 22:05:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:52.821 22:05:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:52.821 22:05:04 -- nvmf/common.sh@294 -- # net_devs=() 00:18:52.821 22:05:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:52.821 22:05:04 -- nvmf/common.sh@295 -- # e810=() 00:18:52.821 22:05:04 -- nvmf/common.sh@295 -- # local -ga e810 00:18:52.821 22:05:04 -- nvmf/common.sh@296 -- # x722=() 00:18:52.821 22:05:04 -- nvmf/common.sh@296 -- # local -ga x722 00:18:52.821 22:05:04 -- nvmf/common.sh@297 -- # mlx=() 00:18:52.821 22:05:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:52.821 22:05:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.821 22:05:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:52.821 22:05:04 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:52.821 22:05:04 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:52.821 22:05:04 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:52.821 22:05:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:52.821 22:05:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:52.821 22:05:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:52.821 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:52.821 22:05:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:52.821 22:05:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:52.821 22:05:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:52.821 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:52.821 22:05:04 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
00:18:52.821 22:05:04 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:52.821 22:05:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:52.821 22:05:04 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:52.821 22:05:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:52.821 22:05:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.821 22:05:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:52.821 22:05:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.821 22:05:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:52.821 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:52.821 22:05:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.821 22:05:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:52.822 22:05:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.822 22:05:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:52.822 22:05:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.822 22:05:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:52.822 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:52.822 22:05:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.822 22:05:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:52.822 22:05:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:52.822 22:05:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:52.822 22:05:04 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:52.822 22:05:04 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:52.822 22:05:04 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:52.822 22:05:04 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:52.822 22:05:04 -- nvmf/common.sh@57 -- # uname 00:18:53.081 22:05:04 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:53.081 22:05:04 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:53.081 22:05:04 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:53.081 22:05:04 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:53.081 22:05:04 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:53.081 22:05:04 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:53.081 22:05:04 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:53.081 22:05:04 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:53.081 22:05:04 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:53.081 22:05:04 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:53.081 22:05:04 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:53.081 22:05:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:53.081 22:05:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:53.081 22:05:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:53.081 22:05:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:53.081 22:05:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:53.081 22:05:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:53.081 22:05:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.081 22:05:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:53.081 22:05:04 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:53.081 22:05:04 -- nvmf/common.sh@104 -- # continue 2 00:18:53.081 22:05:04 
-- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:53.081 22:05:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.081 22:05:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:53.081 22:05:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.081 22:05:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:53.081 22:05:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:53.081 22:05:04 -- nvmf/common.sh@104 -- # continue 2 00:18:53.081 22:05:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:53.081 22:05:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:53.081 22:05:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:53.081 22:05:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:53.081 22:05:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:53.081 22:05:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:53.081 22:05:04 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:53.081 22:05:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:53.081 22:05:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:53.081 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:53.081 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:53.081 altname enp217s0f0np0 00:18:53.081 altname ens818f0np0 00:18:53.081 inet 192.168.100.8/24 scope global mlx_0_0 00:18:53.081 valid_lft forever preferred_lft forever 00:18:53.081 22:05:04 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:53.081 22:05:04 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:53.081 22:05:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:53.081 22:05:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:53.081 22:05:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:53.081 22:05:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:53.081 22:05:04 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:53.081 22:05:04 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:53.081 22:05:04 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:53.081 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:53.081 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:53.081 altname enp217s0f1np1 00:18:53.081 altname ens818f1np1 00:18:53.081 inet 192.168.100.9/24 scope global mlx_0_1 00:18:53.081 valid_lft forever preferred_lft forever 00:18:53.081 22:05:04 -- nvmf/common.sh@410 -- # return 0 00:18:53.081 22:05:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:53.081 22:05:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:53.081 22:05:04 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:53.081 22:05:04 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:53.081 22:05:04 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:53.081 22:05:04 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:53.081 22:05:04 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:53.081 22:05:04 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:53.081 22:05:04 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:53.081 22:05:04 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:53.081 22:05:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:53.082 22:05:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.082 22:05:04 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:53.082 22:05:04 -- nvmf/common.sh@103 -- # echo 
mlx_0_0 00:18:53.082 22:05:04 -- nvmf/common.sh@104 -- # continue 2 00:18:53.082 22:05:04 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:53.082 22:05:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.082 22:05:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:53.082 22:05:04 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:53.082 22:05:04 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:53.082 22:05:04 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:53.082 22:05:04 -- nvmf/common.sh@104 -- # continue 2 00:18:53.082 22:05:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:53.082 22:05:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:53.082 22:05:04 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:53.082 22:05:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:53.082 22:05:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:53.082 22:05:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:53.082 22:05:04 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:53.082 22:05:04 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:53.082 22:05:04 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:53.082 22:05:04 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:53.082 22:05:04 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:53.082 22:05:04 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:53.082 22:05:04 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:53.082 192.168.100.9' 00:18:53.082 22:05:04 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:53.082 192.168.100.9' 00:18:53.082 22:05:04 -- nvmf/common.sh@445 -- # head -n 1 00:18:53.082 22:05:04 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:53.082 22:05:04 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:53.082 192.168.100.9' 00:18:53.082 22:05:04 -- nvmf/common.sh@446 -- # tail -n +2 00:18:53.082 22:05:04 -- nvmf/common.sh@446 -- # head -n 1 00:18:53.082 22:05:04 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:53.082 22:05:04 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:53.082 22:05:04 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:53.082 22:05:04 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:53.082 22:05:04 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:53.082 22:05:04 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:53.082 22:05:04 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:53.082 22:05:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:53.082 22:05:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:53.082 22:05:04 -- common/autotest_common.sh@10 -- # set +x 00:18:53.082 22:05:04 -- nvmf/common.sh@469 -- # nvmfpid=2193109 00:18:53.082 22:05:04 -- nvmf/common.sh@470 -- # waitforlisten 2193109 00:18:53.082 22:05:04 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:53.082 22:05:04 -- common/autotest_common.sh@819 -- # '[' -z 2193109 ']' 00:18:53.082 22:05:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.082 22:05:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:53.082 22:05:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:53.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:53.082 22:05:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:53.082 22:05:04 -- common/autotest_common.sh@10 -- # set +x 00:18:53.341 [2024-07-26 22:05:04.348739] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:53.341 [2024-07-26 22:05:04.348797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.341 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.341 [2024-07-26 22:05:04.436348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:53.341 [2024-07-26 22:05:04.476767] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:53.341 [2024-07-26 22:05:04.476874] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.341 [2024-07-26 22:05:04.476884] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.341 [2024-07-26 22:05:04.476893] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.341 [2024-07-26 22:05:04.476943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.341 [2024-07-26 22:05:04.477045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.341 [2024-07-26 22:05:04.477129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:53.341 [2024-07-26 22:05:04.477131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.278 22:05:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:54.278 22:05:05 -- common/autotest_common.sh@852 -- # return 0 00:18:54.278 22:05:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:54.278 22:05:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:54.278 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 22:05:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:54.278 22:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:54.278 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 22:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:54.278 22:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:54.278 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 22:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:54.278 22:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:54.278 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 [2024-07-26 22:05:05.292432] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12e5410/0x12e9900) succeed. 00:18:54.278 [2024-07-26 22:05:05.302319] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12e6a00/0x132af90) succeed. 
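With the target for the bdev_io_wait test up (started with --wait-for-rpc), the harness configures it entirely over JSON-RPC: a deliberately small bdev_io pool, framework start, the RDMA transport, and then, just below, a Malloc namespace exposed through an RDMA listener. rpc_cmd in the trace is the harness wrapper around rpc.py; the direct form of the same sequence, using the flags and names from this run, is roughly:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
$RPC bdev_set_options -p 5 -c 1                # tiny bdev_io pool, the point of the io_wait test
$RPC framework_start_init                      # leave --wait-for-rpc mode
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The serial number shown in the trace below is the one passed by the script (SPDK00000000000001); the value here is only a placeholder from the suite's defaults.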
00:18:54.278 22:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:54.278 22:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:54.278 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 Malloc0 00:18:54.278 22:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:54.278 22:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:54.278 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 22:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:54.278 22:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:54.278 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 22:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:54.278 22:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:54.278 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:18:54.278 [2024-07-26 22:05:05.476787] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:54.278 22:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2193340 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@30 -- # READ_PID=2193342 00:18:54.278 22:05:05 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:54.278 22:05:05 -- nvmf/common.sh@520 -- # config=() 00:18:54.278 22:05:05 -- nvmf/common.sh@520 -- # local subsystem config 00:18:54.278 22:05:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:54.278 22:05:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:54.278 { 00:18:54.278 "params": { 00:18:54.278 "name": "Nvme$subsystem", 00:18:54.278 "trtype": "$TEST_TRANSPORT", 00:18:54.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:54.278 "adrfam": "ipv4", 00:18:54.278 "trsvcid": "$NVMF_PORT", 00:18:54.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:54.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:54.278 "hdgst": ${hdgst:-false}, 00:18:54.278 "ddgst": ${ddgst:-false} 00:18:54.278 }, 00:18:54.278 "method": "bdev_nvme_attach_controller" 00:18:54.278 } 00:18:54.278 EOF 00:18:54.279 )") 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2193344 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:54.279 22:05:05 -- nvmf/common.sh@520 -- # config=() 00:18:54.279 22:05:05 -- nvmf/common.sh@520 -- # local subsystem config 00:18:54.279 22:05:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:54.279 22:05:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:54.279 { 00:18:54.279 "params": { 00:18:54.279 "name": 
"Nvme$subsystem", 00:18:54.279 "trtype": "$TEST_TRANSPORT", 00:18:54.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:54.279 "adrfam": "ipv4", 00:18:54.279 "trsvcid": "$NVMF_PORT", 00:18:54.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:54.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:54.279 "hdgst": ${hdgst:-false}, 00:18:54.279 "ddgst": ${ddgst:-false} 00:18:54.279 }, 00:18:54.279 "method": "bdev_nvme_attach_controller" 00:18:54.279 } 00:18:54.279 EOF 00:18:54.279 )") 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2193347 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@35 -- # sync 00:18:54.279 22:05:05 -- nvmf/common.sh@542 -- # cat 00:18:54.279 22:05:05 -- nvmf/common.sh@520 -- # config=() 00:18:54.279 22:05:05 -- nvmf/common.sh@520 -- # local subsystem config 00:18:54.279 22:05:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:54.279 22:05:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:54.279 { 00:18:54.279 "params": { 00:18:54.279 "name": "Nvme$subsystem", 00:18:54.279 "trtype": "$TEST_TRANSPORT", 00:18:54.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:54.279 "adrfam": "ipv4", 00:18:54.279 "trsvcid": "$NVMF_PORT", 00:18:54.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:54.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:54.279 "hdgst": ${hdgst:-false}, 00:18:54.279 "ddgst": ${ddgst:-false} 00:18:54.279 }, 00:18:54.279 "method": "bdev_nvme_attach_controller" 00:18:54.279 } 00:18:54.279 EOF 00:18:54.279 )") 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:54.279 22:05:05 -- nvmf/common.sh@520 -- # config=() 00:18:54.279 22:05:05 -- nvmf/common.sh@520 -- # local subsystem config 00:18:54.279 22:05:05 -- nvmf/common.sh@542 -- # cat 00:18:54.279 22:05:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:54.279 22:05:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:54.279 { 00:18:54.279 "params": { 00:18:54.279 "name": "Nvme$subsystem", 00:18:54.279 "trtype": "$TEST_TRANSPORT", 00:18:54.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:54.279 "adrfam": "ipv4", 00:18:54.279 "trsvcid": "$NVMF_PORT", 00:18:54.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:54.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:54.279 "hdgst": ${hdgst:-false}, 00:18:54.279 "ddgst": ${ddgst:-false} 00:18:54.279 }, 00:18:54.279 "method": "bdev_nvme_attach_controller" 00:18:54.279 } 00:18:54.279 EOF 00:18:54.279 )") 00:18:54.279 22:05:05 -- nvmf/common.sh@542 -- # cat 00:18:54.279 22:05:05 -- target/bdev_io_wait.sh@37 -- # wait 2193340 00:18:54.279 22:05:05 -- nvmf/common.sh@542 -- # cat 00:18:54.279 22:05:05 -- nvmf/common.sh@544 -- # jq . 00:18:54.279 22:05:05 -- nvmf/common.sh@544 -- # jq . 00:18:54.279 22:05:05 -- nvmf/common.sh@544 -- # jq . 
00:18:54.279 22:05:05 -- nvmf/common.sh@545 -- # IFS=, 00:18:54.279 22:05:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:54.279 "params": { 00:18:54.279 "name": "Nvme1", 00:18:54.279 "trtype": "rdma", 00:18:54.279 "traddr": "192.168.100.8", 00:18:54.279 "adrfam": "ipv4", 00:18:54.279 "trsvcid": "4420", 00:18:54.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.279 "hdgst": false, 00:18:54.279 "ddgst": false 00:18:54.279 }, 00:18:54.279 "method": "bdev_nvme_attach_controller" 00:18:54.279 }' 00:18:54.279 22:05:05 -- nvmf/common.sh@544 -- # jq . 00:18:54.279 22:05:05 -- nvmf/common.sh@545 -- # IFS=, 00:18:54.279 22:05:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:54.279 "params": { 00:18:54.279 "name": "Nvme1", 00:18:54.279 "trtype": "rdma", 00:18:54.279 "traddr": "192.168.100.8", 00:18:54.279 "adrfam": "ipv4", 00:18:54.279 "trsvcid": "4420", 00:18:54.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.279 "hdgst": false, 00:18:54.279 "ddgst": false 00:18:54.279 }, 00:18:54.279 "method": "bdev_nvme_attach_controller" 00:18:54.279 }' 00:18:54.279 22:05:05 -- nvmf/common.sh@545 -- # IFS=, 00:18:54.279 22:05:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:54.279 "params": { 00:18:54.279 "name": "Nvme1", 00:18:54.279 "trtype": "rdma", 00:18:54.279 "traddr": "192.168.100.8", 00:18:54.279 "adrfam": "ipv4", 00:18:54.279 "trsvcid": "4420", 00:18:54.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.279 "hdgst": false, 00:18:54.279 "ddgst": false 00:18:54.279 }, 00:18:54.279 "method": "bdev_nvme_attach_controller" 00:18:54.279 }' 00:18:54.539 22:05:05 -- nvmf/common.sh@545 -- # IFS=, 00:18:54.539 22:05:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:54.539 "params": { 00:18:54.539 "name": "Nvme1", 00:18:54.539 "trtype": "rdma", 00:18:54.539 "traddr": "192.168.100.8", 00:18:54.539 "adrfam": "ipv4", 00:18:54.539 "trsvcid": "4420", 00:18:54.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:54.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:54.539 "hdgst": false, 00:18:54.539 "ddgst": false 00:18:54.539 }, 00:18:54.539 "method": "bdev_nvme_attach_controller" 00:18:54.539 }' 00:18:54.539 [2024-07-26 22:05:05.525576] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:54.539 [2024-07-26 22:05:05.525623] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:54.539 [2024-07-26 22:05:05.526787] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:54.539 [2024-07-26 22:05:05.526845] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:54.539 [2024-07-26 22:05:05.528074] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:54.539 [2024-07-26 22:05:05.528123] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:54.539 [2024-07-26 22:05:05.530853] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:54.539 [2024-07-26 22:05:05.530901] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:54.539 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.539 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.539 [2024-07-26 22:05:05.699553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.539 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.539 [2024-07-26 22:05:05.721459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:54.539 [2024-07-26 22:05:05.757863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.798 [2024-07-26 22:05:05.780107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:54.798 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.798 [2024-07-26 22:05:05.855256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.798 [2024-07-26 22:05:05.878797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:54.798 [2024-07-26 22:05:05.951897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.798 [2024-07-26 22:05:05.980427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:55.058 Running I/O for 1 seconds... 00:18:55.058 Running I/O for 1 seconds... 00:18:55.058 Running I/O for 1 seconds... 00:18:55.058 Running I/O for 1 seconds... 
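All four bdevperf processes are now running against the same cnode1 subsystem, one workload per process on its own core. The launch-and-collect pattern, restated as a sketch: gen_nvmf_target_json is the harness helper whose heredoc was assembled above, and feeding it through process substitution is an assumption (the trace only shows the resulting /dev/fd/63); the flags are the ones used in this run.

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
BPERF=$SPDK/build/examples/bdevperf
$BPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID   # per-job latency tables follow below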
00:18:55.995 00:18:55.995 Latency(us) 00:18:55.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.995 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:55.995 Nvme1n1 : 1.00 264525.60 1033.30 0.00 0.00 482.73 192.51 2319.97 00:18:55.995 =================================================================================================================== 00:18:55.995 Total : 264525.60 1033.30 0.00 0.00 482.73 192.51 2319.97 00:18:55.995 00:18:55.995 Latency(us) 00:18:55.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.995 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:55.995 Nvme1n1 : 1.01 17687.05 69.09 0.00 0.00 7215.04 4089.45 13841.20 00:18:55.995 =================================================================================================================== 00:18:55.995 Total : 17687.05 69.09 0.00 0.00 7215.04 4089.45 13841.20 00:18:55.995 00:18:55.995 Latency(us) 00:18:55.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.995 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:55.995 Nvme1n1 : 1.00 15348.21 59.95 0.00 0.00 8315.62 4456.45 20027.80 00:18:55.995 =================================================================================================================== 00:18:55.995 Total : 15348.21 59.95 0.00 0.00 8315.62 4456.45 20027.80 00:18:55.995 00:18:55.995 Latency(us) 00:18:55.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:55.995 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:55.996 Nvme1n1 : 1.01 17983.86 70.25 0.00 0.00 7100.28 5819.60 18454.94 00:18:55.996 =================================================================================================================== 00:18:55.996 Total : 17983.86 70.25 0.00 0.00 7100.28 5819.60 18454.94 00:18:56.255 22:05:07 -- target/bdev_io_wait.sh@38 -- # wait 2193342 00:18:56.255 22:05:07 -- target/bdev_io_wait.sh@39 -- # wait 2193344 00:18:56.255 22:05:07 -- target/bdev_io_wait.sh@40 -- # wait 2193347 00:18:56.255 22:05:07 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.255 22:05:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.255 22:05:07 -- common/autotest_common.sh@10 -- # set +x 00:18:56.255 22:05:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.255 22:05:07 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:56.255 22:05:07 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:56.255 22:05:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:56.255 22:05:07 -- nvmf/common.sh@116 -- # sync 00:18:56.255 22:05:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:56.255 22:05:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:56.255 22:05:07 -- nvmf/common.sh@119 -- # set +e 00:18:56.255 22:05:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:56.255 22:05:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:56.255 rmmod nvme_rdma 00:18:56.514 rmmod nvme_fabrics 00:18:56.514 22:05:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:56.514 22:05:07 -- nvmf/common.sh@123 -- # set -e 00:18:56.514 22:05:07 -- nvmf/common.sh@124 -- # return 0 00:18:56.514 22:05:07 -- nvmf/common.sh@477 -- # '[' -n 2193109 ']' 00:18:56.514 22:05:07 -- nvmf/common.sh@478 -- # killprocess 2193109 00:18:56.514 22:05:07 -- common/autotest_common.sh@926 -- # '[' -z 2193109 ']' 
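With the four latency tables collected, the trace above and just below tears everything down: the background PIDs are reaped, the subsystem is deleted, nvmftestfini unloads the host RDMA modules, and the target started for this test is killed. The same cleanup in direct form, a sketch using the NQN, module names, and pid from this run:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics
kill 2193109 && wait 2193109          # stop the nvmf_tgt launched for bdev_io_wait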
00:18:56.514 22:05:07 -- common/autotest_common.sh@930 -- # kill -0 2193109 00:18:56.514 22:05:07 -- common/autotest_common.sh@931 -- # uname 00:18:56.514 22:05:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:56.514 22:05:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2193109 00:18:56.514 22:05:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:56.514 22:05:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:56.514 22:05:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2193109' 00:18:56.514 killing process with pid 2193109 00:18:56.514 22:05:07 -- common/autotest_common.sh@945 -- # kill 2193109 00:18:56.514 22:05:07 -- common/autotest_common.sh@950 -- # wait 2193109 00:18:56.774 22:05:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:56.774 22:05:07 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:56.774 00:18:56.774 real 0m12.145s 00:18:56.774 user 0m20.877s 00:18:56.774 sys 0m7.939s 00:18:56.774 22:05:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:56.774 22:05:07 -- common/autotest_common.sh@10 -- # set +x 00:18:56.774 ************************************ 00:18:56.774 END TEST nvmf_bdev_io_wait 00:18:56.774 ************************************ 00:18:56.774 22:05:07 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:56.774 22:05:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:56.774 22:05:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:56.774 22:05:07 -- common/autotest_common.sh@10 -- # set +x 00:18:56.774 ************************************ 00:18:56.774 START TEST nvmf_queue_depth 00:18:56.774 ************************************ 00:18:56.774 22:05:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:18:56.774 * Looking for test storage... 
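nvmf_bdev_io_wait finishes in about 12.1 s of wall clock, and the suite immediately dispatches the next target test through the same run_test wrapper that produced the START/END banners above; the queue_depth script then repeats the environment bootstrap that follows. The dispatching call, as traced here:

run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma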
00:18:56.774 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:56.774 22:05:07 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.774 22:05:07 -- nvmf/common.sh@7 -- # uname -s 00:18:56.774 22:05:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.774 22:05:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.774 22:05:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.774 22:05:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.774 22:05:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.774 22:05:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.774 22:05:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.774 22:05:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.774 22:05:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.774 22:05:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.774 22:05:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:56.774 22:05:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:56.774 22:05:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.774 22:05:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.774 22:05:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.774 22:05:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:56.774 22:05:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.774 22:05:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.774 22:05:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.774 22:05:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.774 22:05:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.774 22:05:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.774 22:05:07 -- paths/export.sh@5 -- # export PATH 00:18:56.774 22:05:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.774 22:05:07 -- nvmf/common.sh@46 -- # : 0 00:18:56.774 22:05:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:56.774 22:05:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:56.774 22:05:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:56.774 22:05:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.774 22:05:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.774 22:05:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:56.774 22:05:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:56.774 22:05:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:56.774 22:05:07 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:56.774 22:05:07 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:56.774 22:05:07 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.774 22:05:07 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:56.774 22:05:07 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:56.774 22:05:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.774 22:05:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:56.774 22:05:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:56.774 22:05:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:56.774 22:05:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.774 22:05:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.774 22:05:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.774 22:05:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:56.774 22:05:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:56.774 22:05:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:56.774 22:05:07 -- common/autotest_common.sh@10 -- # set +x 00:19:04.935 22:05:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:04.935 22:05:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:04.935 22:05:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:04.935 22:05:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:04.935 22:05:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:04.935 22:05:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:04.935 22:05:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:04.935 22:05:16 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:04.935 22:05:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:04.935 22:05:16 -- nvmf/common.sh@295 -- # e810=() 00:19:04.935 22:05:16 -- nvmf/common.sh@295 -- # local -ga e810 00:19:04.935 22:05:16 -- nvmf/common.sh@296 -- # x722=() 00:19:04.935 22:05:16 -- nvmf/common.sh@296 -- # local -ga x722 00:19:04.935 22:05:16 -- nvmf/common.sh@297 -- # mlx=() 00:19:04.935 22:05:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:04.935 22:05:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.935 22:05:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:04.935 22:05:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:04.935 22:05:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:04.935 22:05:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:04.935 22:05:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:04.935 22:05:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:04.935 22:05:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:04.935 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:04.935 22:05:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.935 22:05:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:04.935 22:05:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:04.935 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:04.935 22:05:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.935 22:05:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:04.935 22:05:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:04.935 22:05:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:04.935 22:05:16 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.935 22:05:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:04.935 22:05:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.935 22:05:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:04.935 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:04.935 22:05:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.935 22:05:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:04.936 22:05:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.936 22:05:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:04.936 22:05:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.936 22:05:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:04.936 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:04.936 22:05:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.936 22:05:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:04.936 22:05:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:04.936 22:05:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:04.936 22:05:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:04.936 22:05:16 -- nvmf/common.sh@57 -- # uname 00:19:04.936 22:05:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:04.936 22:05:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:04.936 22:05:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:04.936 22:05:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:04.936 22:05:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:04.936 22:05:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:04.936 22:05:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:04.936 22:05:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:04.936 22:05:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:04.936 22:05:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:04.936 22:05:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:04.936 22:05:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.936 22:05:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:04.936 22:05:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:04.936 22:05:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.936 22:05:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:04.936 22:05:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.936 22:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.936 22:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:04.936 22:05:16 -- nvmf/common.sh@104 -- # continue 2 00:19:04.936 22:05:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:04.936 22:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.936 22:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.936 22:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:04.936 22:05:16 -- 
nvmf/common.sh@104 -- # continue 2 00:19:04.936 22:05:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:04.936 22:05:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:04.936 22:05:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:04.936 22:05:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:04.936 22:05:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.936 22:05:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.936 22:05:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:04.936 22:05:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:04.936 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.936 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:04.936 altname enp217s0f0np0 00:19:04.936 altname ens818f0np0 00:19:04.936 inet 192.168.100.8/24 scope global mlx_0_0 00:19:04.936 valid_lft forever preferred_lft forever 00:19:04.936 22:05:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:04.936 22:05:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:04.936 22:05:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:04.936 22:05:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:04.936 22:05:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:04.936 22:05:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:04.936 22:05:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:04.936 22:05:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:04.936 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.936 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:04.936 altname enp217s0f1np1 00:19:04.936 altname ens818f1np1 00:19:04.936 inet 192.168.100.9/24 scope global mlx_0_1 00:19:04.936 valid_lft forever preferred_lft forever 00:19:04.936 22:05:16 -- nvmf/common.sh@410 -- # return 0 00:19:04.936 22:05:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:04.936 22:05:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:04.936 22:05:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:04.936 22:05:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:04.936 22:05:16 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:04.936 22:05:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.936 22:05:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:04.936 22:05:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:04.936 22:05:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:05.196 22:05:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:05.196 22:05:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:05.196 22:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:05.196 22:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:05.196 22:05:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:05.196 22:05:16 -- nvmf/common.sh@104 -- # continue 2 00:19:05.196 22:05:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:05.196 22:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:05.196 22:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:05.196 22:05:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:05.196 22:05:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
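The nvmftestinit sequence traced above discovers the Mellanox ports by PCI vendor/device ID, maps each PCI function to its netdev through sysfs, and then reads the IPv4 address assigned to each RDMA interface. Below is a minimal sketch of that lookup, built only from the commands visible in the trace; the helper name mirrors get_ip_address in nvmf/common.sh and the interface names are the ones reported above, so treat it as an illustrative reconstruction rather than the full common.sh.

  #!/usr/bin/env bash
  # Map a PCI function to its netdev, as in gather_supported_nvmf_pci_devs:
  pci=0000:d9:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/mlx_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path -> mlx_0_0

  # Read the interface's IPv4 address, as in get_ip_address:
  get_ip_address() {
      local interface=$1
      # field 4 of "ip -o -4 addr show" is the CIDR address, e.g. 192.168.100.8/24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
  get_ip_address mlx_0_1    # -> 192.168.100.9 in this run

The two addresses are then gathered into RDMA_IP_LIST and split with head/tail into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, which is the address the RDMA listener below binds to.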
00:19:05.196 22:05:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:05.196 22:05:16 -- nvmf/common.sh@104 -- # continue 2 00:19:05.196 22:05:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:05.196 22:05:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:05.196 22:05:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:05.196 22:05:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:05.196 22:05:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:05.196 22:05:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:05.196 22:05:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:05.196 22:05:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:05.196 22:05:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:05.196 22:05:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:05.196 22:05:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:05.196 22:05:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:05.196 22:05:16 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:05.196 192.168.100.9' 00:19:05.196 22:05:16 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:05.196 192.168.100.9' 00:19:05.196 22:05:16 -- nvmf/common.sh@445 -- # head -n 1 00:19:05.196 22:05:16 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:05.196 22:05:16 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:05.196 192.168.100.9' 00:19:05.196 22:05:16 -- nvmf/common.sh@446 -- # tail -n +2 00:19:05.196 22:05:16 -- nvmf/common.sh@446 -- # head -n 1 00:19:05.196 22:05:16 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:05.196 22:05:16 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:05.196 22:05:16 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:05.196 22:05:16 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:05.196 22:05:16 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:05.196 22:05:16 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:05.196 22:05:16 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:05.196 22:05:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:05.196 22:05:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:05.196 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:19:05.196 22:05:16 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:05.196 22:05:16 -- nvmf/common.sh@469 -- # nvmfpid=2197827 00:19:05.196 22:05:16 -- nvmf/common.sh@470 -- # waitforlisten 2197827 00:19:05.196 22:05:16 -- common/autotest_common.sh@819 -- # '[' -z 2197827 ']' 00:19:05.196 22:05:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.196 22:05:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:05.196 22:05:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.196 22:05:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:05.196 22:05:16 -- common/autotest_common.sh@10 -- # set +x 00:19:05.196 [2024-07-26 22:05:16.260359] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:05.196 [2024-07-26 22:05:16.260408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.196 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.196 [2024-07-26 22:05:16.345750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.196 [2024-07-26 22:05:16.382030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:05.196 [2024-07-26 22:05:16.382140] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.196 [2024-07-26 22:05:16.382149] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.196 [2024-07-26 22:05:16.382158] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:05.196 [2024-07-26 22:05:16.382177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.134 22:05:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:06.134 22:05:17 -- common/autotest_common.sh@852 -- # return 0 00:19:06.134 22:05:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:06.134 22:05:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:06.134 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:06.134 22:05:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.134 22:05:17 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:06.134 22:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.134 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:06.134 [2024-07-26 22:05:17.132512] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x76c620/0x770b10) succeed. 00:19:06.134 [2024-07-26 22:05:17.141553] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x76db20/0x7b21a0) succeed. 
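With the RDMA transport created (the two create_ib_device notices above), the queue_depth script provisions the target through a short series of RPCs, which the next trace lines issue one by one. Collected in one place, the target-side setup amounts to the following sketch; the rpc.py path is the one this workspace uses elsewhere in the log, and the sizes, NQN, and serial are taken verbatim from the trace.

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport, 8 KiB IO unit
  $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The initiator side then starts bdevperf with -q 1024 -o 4096 -w verify -t 10, attaches the exported namespace with bdev_nvme_attach_controller over the same RDMA address and port, and drives the 10-second verify workload whose per-device results appear further down.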
00:19:06.134 22:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.134 22:05:17 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:06.134 22:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.134 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:06.134 Malloc0 00:19:06.134 22:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.134 22:05:17 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:06.134 22:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.134 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:06.134 22:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.134 22:05:17 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:06.134 22:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.134 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:06.134 22:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.134 22:05:17 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:06.134 22:05:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.134 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:06.135 [2024-07-26 22:05:17.233148] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:06.135 22:05:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.135 22:05:17 -- target/queue_depth.sh@30 -- # bdevperf_pid=2198043 00:19:06.135 22:05:17 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:06.135 22:05:17 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:06.135 22:05:17 -- target/queue_depth.sh@33 -- # waitforlisten 2198043 /var/tmp/bdevperf.sock 00:19:06.135 22:05:17 -- common/autotest_common.sh@819 -- # '[' -z 2198043 ']' 00:19:06.135 22:05:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.135 22:05:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:06.135 22:05:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.135 22:05:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:06.135 22:05:17 -- common/autotest_common.sh@10 -- # set +x 00:19:06.135 [2024-07-26 22:05:17.280544] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:06.135 [2024-07-26 22:05:17.280591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198043 ] 00:19:06.135 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.394 [2024-07-26 22:05:17.364966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.394 [2024-07-26 22:05:17.403412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.962 22:05:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:06.962 22:05:18 -- common/autotest_common.sh@852 -- # return 0 00:19:06.962 22:05:18 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:06.962 22:05:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.962 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:19:06.962 NVMe0n1 00:19:06.962 22:05:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.962 22:05:18 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.222 Running I/O for 10 seconds... 00:19:17.202 00:19:17.202 Latency(us) 00:19:17.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.202 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:17.202 Verification LBA range: start 0x0 length 0x4000 00:19:17.202 NVMe0n1 : 10.03 29589.06 115.58 0.00 0.00 34531.15 7811.89 26843.55 00:19:17.202 =================================================================================================================== 00:19:17.202 Total : 29589.06 115.58 0.00 0.00 34531.15 7811.89 26843.55 00:19:17.202 0 00:19:17.202 22:05:28 -- target/queue_depth.sh@39 -- # killprocess 2198043 00:19:17.202 22:05:28 -- common/autotest_common.sh@926 -- # '[' -z 2198043 ']' 00:19:17.202 22:05:28 -- common/autotest_common.sh@930 -- # kill -0 2198043 00:19:17.202 22:05:28 -- common/autotest_common.sh@931 -- # uname 00:19:17.202 22:05:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:17.202 22:05:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2198043 00:19:17.202 22:05:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:17.202 22:05:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:17.202 22:05:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2198043' 00:19:17.202 killing process with pid 2198043 00:19:17.202 22:05:28 -- common/autotest_common.sh@945 -- # kill 2198043 00:19:17.202 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.202 00:19:17.202 Latency(us) 00:19:17.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.202 =================================================================================================================== 00:19:17.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.202 22:05:28 -- common/autotest_common.sh@950 -- # wait 2198043 00:19:17.462 22:05:28 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:17.462 22:05:28 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:17.462 22:05:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:17.462 22:05:28 -- nvmf/common.sh@116 -- # sync 00:19:17.462 22:05:28 -- nvmf/common.sh@118 -- # '[' rdma == tcp 
']' 00:19:17.462 22:05:28 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:17.462 22:05:28 -- nvmf/common.sh@119 -- # set +e 00:19:17.462 22:05:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:17.462 22:05:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:17.462 rmmod nvme_rdma 00:19:17.462 rmmod nvme_fabrics 00:19:17.462 22:05:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:17.462 22:05:28 -- nvmf/common.sh@123 -- # set -e 00:19:17.462 22:05:28 -- nvmf/common.sh@124 -- # return 0 00:19:17.462 22:05:28 -- nvmf/common.sh@477 -- # '[' -n 2197827 ']' 00:19:17.462 22:05:28 -- nvmf/common.sh@478 -- # killprocess 2197827 00:19:17.462 22:05:28 -- common/autotest_common.sh@926 -- # '[' -z 2197827 ']' 00:19:17.462 22:05:28 -- common/autotest_common.sh@930 -- # kill -0 2197827 00:19:17.462 22:05:28 -- common/autotest_common.sh@931 -- # uname 00:19:17.462 22:05:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:17.462 22:05:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2197827 00:19:17.462 22:05:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:17.462 22:05:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:17.462 22:05:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2197827' 00:19:17.462 killing process with pid 2197827 00:19:17.462 22:05:28 -- common/autotest_common.sh@945 -- # kill 2197827 00:19:17.462 22:05:28 -- common/autotest_common.sh@950 -- # wait 2197827 00:19:17.722 22:05:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:17.722 22:05:28 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:17.722 00:19:17.722 real 0m21.027s 00:19:17.722 user 0m26.454s 00:19:17.722 sys 0m6.933s 00:19:17.722 22:05:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.722 22:05:28 -- common/autotest_common.sh@10 -- # set +x 00:19:17.722 ************************************ 00:19:17.722 END TEST nvmf_queue_depth 00:19:17.722 ************************************ 00:19:17.722 22:05:28 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:17.722 22:05:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:17.722 22:05:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:17.722 22:05:28 -- common/autotest_common.sh@10 -- # set +x 00:19:17.722 ************************************ 00:19:17.722 START TEST nvmf_multipath 00:19:17.722 ************************************ 00:19:17.722 22:05:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:17.981 * Looking for test storage... 
00:19:17.981 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:17.981 22:05:29 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.981 22:05:29 -- nvmf/common.sh@7 -- # uname -s 00:19:17.981 22:05:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.981 22:05:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.981 22:05:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.981 22:05:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.981 22:05:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.981 22:05:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.981 22:05:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.981 22:05:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.981 22:05:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.981 22:05:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.981 22:05:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:17.981 22:05:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:17.981 22:05:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.981 22:05:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.981 22:05:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.981 22:05:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:17.981 22:05:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.981 22:05:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.982 22:05:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.982 22:05:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.982 22:05:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.982 22:05:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.982 22:05:29 -- paths/export.sh@5 -- # export PATH 00:19:17.982 22:05:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.982 22:05:29 -- nvmf/common.sh@46 -- # : 0 00:19:17.982 22:05:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:17.982 22:05:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:17.982 22:05:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:17.982 22:05:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.982 22:05:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.982 22:05:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:17.982 22:05:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:17.982 22:05:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:17.982 22:05:29 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.982 22:05:29 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.982 22:05:29 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:17.982 22:05:29 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:17.982 22:05:29 -- target/multipath.sh@43 -- # nvmftestinit 00:19:17.982 22:05:29 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:17.982 22:05:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.982 22:05:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:17.982 22:05:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:17.982 22:05:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:17.982 22:05:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.982 22:05:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.982 22:05:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.982 22:05:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:17.982 22:05:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:17.982 22:05:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:17.982 22:05:29 -- common/autotest_common.sh@10 -- # set +x 00:19:26.104 22:05:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:26.104 22:05:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:26.104 22:05:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:26.104 22:05:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:26.104 22:05:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:26.104 22:05:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:26.104 22:05:36 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:26.104 22:05:36 -- nvmf/common.sh@294 -- # net_devs=() 00:19:26.104 22:05:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:26.104 22:05:36 -- nvmf/common.sh@295 -- # e810=() 00:19:26.104 22:05:36 -- nvmf/common.sh@295 -- # local -ga e810 00:19:26.104 22:05:36 -- nvmf/common.sh@296 -- # x722=() 00:19:26.104 22:05:36 -- nvmf/common.sh@296 -- # local -ga x722 00:19:26.104 22:05:36 -- nvmf/common.sh@297 -- # mlx=() 00:19:26.104 22:05:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:26.104 22:05:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.104 22:05:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:26.104 22:05:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:26.104 22:05:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:26.104 22:05:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:26.104 22:05:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:26.104 22:05:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:26.104 22:05:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:26.104 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:26.104 22:05:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:26.104 22:05:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:26.104 22:05:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:26.104 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:26.104 22:05:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:26.104 22:05:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:26.104 22:05:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:26.104 22:05:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:26.104 22:05:36 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:26.104 22:05:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.104 22:05:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:26.104 22:05:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.105 22:05:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:26.105 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:26.105 22:05:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.105 22:05:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:26.105 22:05:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.105 22:05:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:26.105 22:05:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.105 22:05:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:26.105 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:26.105 22:05:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.105 22:05:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:26.105 22:05:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:26.105 22:05:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:26.105 22:05:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:26.105 22:05:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:26.105 22:05:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:26.105 22:05:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:26.105 22:05:36 -- nvmf/common.sh@57 -- # uname 00:19:26.105 22:05:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:26.105 22:05:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:26.105 22:05:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:26.105 22:05:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:26.105 22:05:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:26.105 22:05:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:26.105 22:05:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:26.105 22:05:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:26.105 22:05:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:26.105 22:05:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:26.105 22:05:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:26.105 22:05:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:26.105 22:05:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:26.105 22:05:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:26.105 22:05:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:26.105 22:05:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:26.105 22:05:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:26.105 22:05:37 -- nvmf/common.sh@104 -- # continue 2 00:19:26.105 22:05:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:26.105 22:05:37 -- nvmf/common.sh@104 -- # continue 2 00:19:26.105 22:05:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:26.105 22:05:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:26.105 22:05:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:26.105 22:05:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:26.105 22:05:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:26.105 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:26.105 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:26.105 altname enp217s0f0np0 00:19:26.105 altname ens818f0np0 00:19:26.105 inet 192.168.100.8/24 scope global mlx_0_0 00:19:26.105 valid_lft forever preferred_lft forever 00:19:26.105 22:05:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:26.105 22:05:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:26.105 22:05:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:26.105 22:05:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:26.105 22:05:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:26.105 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:26.105 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:26.105 altname enp217s0f1np1 00:19:26.105 altname ens818f1np1 00:19:26.105 inet 192.168.100.9/24 scope global mlx_0_1 00:19:26.105 valid_lft forever preferred_lft forever 00:19:26.105 22:05:37 -- nvmf/common.sh@410 -- # return 0 00:19:26.105 22:05:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:26.105 22:05:37 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:26.105 22:05:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:26.105 22:05:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:26.105 22:05:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:26.105 22:05:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:26.105 22:05:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:26.105 22:05:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:26.105 22:05:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:26.105 22:05:37 -- nvmf/common.sh@104 -- # continue 2 00:19:26.105 22:05:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:26.105 22:05:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:26.105 22:05:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:26.105 22:05:37 -- nvmf/common.sh@104 -- # continue 2 00:19:26.105 22:05:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:26.105 22:05:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:26.105 22:05:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:26.105 22:05:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:26.105 22:05:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:26.105 22:05:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:26.105 22:05:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:26.105 22:05:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:26.105 192.168.100.9' 00:19:26.105 22:05:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:26.105 192.168.100.9' 00:19:26.105 22:05:37 -- nvmf/common.sh@445 -- # head -n 1 00:19:26.105 22:05:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:26.105 22:05:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:26.105 192.168.100.9' 00:19:26.105 22:05:37 -- nvmf/common.sh@446 -- # tail -n +2 00:19:26.105 22:05:37 -- nvmf/common.sh@446 -- # head -n 1 00:19:26.105 22:05:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:26.105 22:05:37 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:26.105 22:05:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:26.105 22:05:37 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:26.105 22:05:37 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:26.105 22:05:37 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:26.105 run this test only with TCP transport for now 00:19:26.105 22:05:37 -- target/multipath.sh@53 -- # nvmftestfini 00:19:26.105 22:05:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:26.105 22:05:37 -- nvmf/common.sh@116 -- # sync 00:19:26.105 22:05:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@119 -- # set +e 00:19:26.105 22:05:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:26.105 22:05:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:26.105 rmmod nvme_rdma 00:19:26.105 rmmod nvme_fabrics 00:19:26.105 22:05:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:26.105 22:05:37 -- nvmf/common.sh@123 -- # set -e 00:19:26.105 22:05:37 -- nvmf/common.sh@124 -- # return 0 00:19:26.105 22:05:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:26.105 22:05:37 -- target/multipath.sh@54 -- # exit 0 00:19:26.105 22:05:37 -- target/multipath.sh@1 -- # nvmftestfini 00:19:26.105 22:05:37 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:19:26.105 22:05:37 -- nvmf/common.sh@116 -- # sync 00:19:26.105 22:05:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@119 -- # set +e 00:19:26.105 22:05:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:26.105 22:05:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:26.105 22:05:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:26.105 22:05:37 -- nvmf/common.sh@123 -- # set -e 00:19:26.105 22:05:37 -- nvmf/common.sh@124 -- # return 0 00:19:26.105 22:05:37 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:26.105 22:05:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:26.105 00:19:26.105 real 0m8.275s 00:19:26.106 user 0m2.365s 00:19:26.106 sys 0m6.127s 00:19:26.106 22:05:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:26.106 22:05:37 -- common/autotest_common.sh@10 -- # set +x 00:19:26.106 ************************************ 00:19:26.106 END TEST nvmf_multipath 00:19:26.106 ************************************ 00:19:26.106 22:05:37 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:26.106 22:05:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:26.106 22:05:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:26.106 22:05:37 -- common/autotest_common.sh@10 -- # set +x 00:19:26.106 ************************************ 00:19:26.106 START TEST nvmf_zcopy 00:19:26.106 ************************************ 00:19:26.106 22:05:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:26.366 * Looking for test storage... 
00:19:26.366 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:26.366 22:05:37 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.366 22:05:37 -- nvmf/common.sh@7 -- # uname -s 00:19:26.366 22:05:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.366 22:05:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.366 22:05:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.366 22:05:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.366 22:05:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.366 22:05:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.366 22:05:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.366 22:05:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.366 22:05:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.366 22:05:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.366 22:05:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:26.366 22:05:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:26.366 22:05:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.366 22:05:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.366 22:05:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.366 22:05:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:26.366 22:05:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.366 22:05:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.366 22:05:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.366 22:05:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.366 22:05:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.366 22:05:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.366 22:05:37 -- paths/export.sh@5 -- # export PATH 00:19:26.366 22:05:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.366 22:05:37 -- nvmf/common.sh@46 -- # : 0 00:19:26.366 22:05:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:26.366 22:05:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:26.366 22:05:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:26.366 22:05:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.366 22:05:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.366 22:05:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:26.366 22:05:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:26.366 22:05:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:26.366 22:05:37 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:26.366 22:05:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:26.366 22:05:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.366 22:05:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:26.366 22:05:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:26.366 22:05:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:26.366 22:05:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.366 22:05:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.366 22:05:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.366 22:05:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:26.366 22:05:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:26.366 22:05:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:26.366 22:05:37 -- common/autotest_common.sh@10 -- # set +x 00:19:34.479 22:05:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:34.479 22:05:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:34.479 22:05:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:34.479 22:05:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:34.479 22:05:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:34.479 22:05:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:34.479 22:05:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:34.479 22:05:44 -- nvmf/common.sh@294 -- # net_devs=() 00:19:34.479 22:05:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:34.479 22:05:44 -- nvmf/common.sh@295 -- # e810=() 00:19:34.479 22:05:44 -- nvmf/common.sh@295 -- # local -ga e810 00:19:34.479 22:05:44 -- nvmf/common.sh@296 -- # x722=() 
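The zcopy test re-runs the same nvmftestinit as the two tests above; part of that is rdma_device_init, which loads the kernel RDMA stack before the NVMe fabrics initiator module. The module names below are exactly the modprobe calls that appear in each init in this log (they recur again just below for zcopy); the loop is only a condensed way of writing the same sequence.

  #!/usr/bin/env bash
  # Condensed sketch of load_ib_rdma_modules (nvmf/common.sh), whose individual
  # modprobe calls are traced for every nvmftestinit in this log.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done
  # Once the transport is confirmed to be RDMA, the initiator-side module is
  # loaded as well (the "modprobe nvme-rdma" line later in each init):
  modprobe nvme-rdma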
00:19:34.479 22:05:44 -- nvmf/common.sh@296 -- # local -ga x722 00:19:34.479 22:05:44 -- nvmf/common.sh@297 -- # mlx=() 00:19:34.479 22:05:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:34.479 22:05:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.479 22:05:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:34.479 22:05:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:34.479 22:05:44 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:34.479 22:05:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:34.479 22:05:44 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:34.479 22:05:44 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:34.479 22:05:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:34.479 22:05:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:34.479 22:05:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:34.479 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:34.479 22:05:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:34.479 22:05:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:34.479 22:05:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:34.479 22:05:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:34.479 22:05:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:34.479 22:05:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:34.480 22:05:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:34.480 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:34.480 22:05:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:34.480 22:05:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:34.480 22:05:44 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.480 22:05:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:34.480 22:05:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.480 22:05:44 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:34.480 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.480 22:05:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.480 22:05:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:34.480 22:05:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.480 22:05:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:34.480 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.480 22:05:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:34.480 22:05:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:34.480 22:05:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:34.480 22:05:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:34.480 22:05:44 -- nvmf/common.sh@57 -- # uname 00:19:34.480 22:05:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:34.480 22:05:44 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:34.480 22:05:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:34.480 22:05:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:34.480 22:05:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:34.480 22:05:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:34.480 22:05:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:34.480 22:05:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:34.480 22:05:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:34.480 22:05:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:34.480 22:05:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:34.480 22:05:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:34.480 22:05:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:34.480 22:05:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:34.480 22:05:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:34.480 22:05:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:34.480 22:05:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@104 -- # continue 2 00:19:34.480 22:05:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@104 -- # continue 2 00:19:34.480 22:05:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:34.480 22:05:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:34.480 22:05:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:34.480 22:05:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:34.480 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:34.480 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:34.480 altname enp217s0f0np0 00:19:34.480 altname ens818f0np0 00:19:34.480 inet 192.168.100.8/24 scope global mlx_0_0 00:19:34.480 valid_lft forever preferred_lft forever 00:19:34.480 22:05:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:34.480 22:05:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:34.480 22:05:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:34.480 22:05:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:34.480 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:34.480 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:34.480 altname enp217s0f1np1 00:19:34.480 altname ens818f1np1 00:19:34.480 inet 192.168.100.9/24 scope global mlx_0_1 00:19:34.480 valid_lft forever preferred_lft forever 00:19:34.480 22:05:44 -- nvmf/common.sh@410 -- # return 0 00:19:34.480 22:05:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:34.480 22:05:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:34.480 22:05:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:34.480 22:05:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:34.480 22:05:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:34.480 22:05:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:34.480 22:05:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:34.480 22:05:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:34.480 22:05:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:34.480 22:05:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@104 -- # continue 2 00:19:34.480 22:05:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.480 22:05:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:34.480 22:05:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@104 -- # continue 2 00:19:34.480 22:05:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:34.480 22:05:44 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:34.480 22:05:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:34.480 22:05:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:34.480 22:05:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:34.480 22:05:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:34.480 192.168.100.9' 00:19:34.480 22:05:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:34.480 192.168.100.9' 00:19:34.480 22:05:44 -- nvmf/common.sh@445 -- # head -n 1 00:19:34.480 22:05:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:34.480 22:05:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:34.480 192.168.100.9' 00:19:34.480 22:05:44 -- nvmf/common.sh@446 -- # tail -n +2 00:19:34.480 22:05:44 -- nvmf/common.sh@446 -- # head -n 1 00:19:34.480 22:05:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:34.480 22:05:44 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:34.480 22:05:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:34.480 22:05:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:34.480 22:05:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:34.480 22:05:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:34.480 22:05:44 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:34.480 22:05:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:34.480 22:05:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:34.480 22:05:44 -- common/autotest_common.sh@10 -- # set +x 00:19:34.480 22:05:45 -- nvmf/common.sh@469 -- # nvmfpid=2207837 00:19:34.480 22:05:45 -- nvmf/common.sh@470 -- # waitforlisten 2207837 00:19:34.480 22:05:45 -- common/autotest_common.sh@819 -- # '[' -z 2207837 ']' 00:19:34.480 22:05:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.480 22:05:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:34.480 22:05:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.480 22:05:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:34.480 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:19:34.481 22:05:45 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.481 [2024-07-26 22:05:45.048753] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:34.481 [2024-07-26 22:05:45.048803] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.481 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.481 [2024-07-26 22:05:45.134034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.481 [2024-07-26 22:05:45.170781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:34.481 [2024-07-26 22:05:45.170902] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.481 [2024-07-26 22:05:45.170911] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.481 [2024-07-26 22:05:45.170920] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.481 [2024-07-26 22:05:45.170939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.741 22:05:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:34.741 22:05:45 -- common/autotest_common.sh@852 -- # return 0 00:19:34.741 22:05:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:34.741 22:05:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:34.741 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:19:34.741 22:05:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.741 22:05:45 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:34.741 22:05:45 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:34.741 Unsupported transport: rdma 00:19:34.741 22:05:45 -- target/zcopy.sh@17 -- # exit 0 00:19:34.741 22:05:45 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:34.741 22:05:45 -- common/autotest_common.sh@796 -- # type=--id 00:19:34.741 22:05:45 -- common/autotest_common.sh@797 -- # id=0 00:19:34.741 22:05:45 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:34.741 22:05:45 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:34.741 22:05:45 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:34.741 22:05:45 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:34.741 22:05:45 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:34.741 22:05:45 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:34.741 nvmf_trace.0 00:19:34.741 22:05:45 -- common/autotest_common.sh@811 -- # return 0 00:19:34.741 22:05:45 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:34.741 22:05:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:34.741 22:05:45 -- nvmf/common.sh@116 -- # sync 00:19:34.741 22:05:45 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:34.741 22:05:45 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:34.741 22:05:45 -- nvmf/common.sh@119 -- # set +e 00:19:34.741 22:05:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:34.741 22:05:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:34.741 rmmod nvme_rdma 00:19:34.741 rmmod nvme_fabrics 00:19:34.741 22:05:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:34.741 22:05:45 -- nvmf/common.sh@123 -- # set -e 00:19:34.741 22:05:45 -- nvmf/common.sh@124 -- # return 0 00:19:34.741 22:05:45 -- nvmf/common.sh@477 -- # '[' -n 2207837 ']' 
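Condensed, the teardown that nvmftestfini walks through in the trace above amounts to the following shell sequence; this is a sketch reconstructed from the trace, reusing the pid recorded by nvmfappstart in this run (2207837), not a separate helper in the harness:
  # pid that nvmfappstart recorded for nvmf_tgt in this run
  nvmfpid=2207837
  # flush outstanding I/O, then unload the NVMe-oF initiator modules loaded at init
  sync
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  # stop the target process and reap it, as killprocess does below
  kill "$nvmfpid" && wait "$nvmfpid"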
00:19:34.741 22:05:45 -- nvmf/common.sh@478 -- # killprocess 2207837 00:19:34.741 22:05:45 -- common/autotest_common.sh@926 -- # '[' -z 2207837 ']' 00:19:34.741 22:05:45 -- common/autotest_common.sh@930 -- # kill -0 2207837 00:19:34.741 22:05:45 -- common/autotest_common.sh@931 -- # uname 00:19:34.741 22:05:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:34.741 22:05:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2207837 00:19:35.056 22:05:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:35.056 22:05:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:35.056 22:05:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2207837' 00:19:35.056 killing process with pid 2207837 00:19:35.056 22:05:45 -- common/autotest_common.sh@945 -- # kill 2207837 00:19:35.056 22:05:45 -- common/autotest_common.sh@950 -- # wait 2207837 00:19:35.056 22:05:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:35.056 22:05:46 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:35.056 00:19:35.056 real 0m8.889s 00:19:35.056 user 0m3.277s 00:19:35.056 sys 0m6.137s 00:19:35.056 22:05:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.056 22:05:46 -- common/autotest_common.sh@10 -- # set +x 00:19:35.056 ************************************ 00:19:35.056 END TEST nvmf_zcopy 00:19:35.056 ************************************ 00:19:35.056 22:05:46 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:35.056 22:05:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:35.056 22:05:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:35.056 22:05:46 -- common/autotest_common.sh@10 -- # set +x 00:19:35.056 ************************************ 00:19:35.056 START TEST nvmf_nmic 00:19:35.056 ************************************ 00:19:35.056 22:05:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:35.316 * Looking for test storage... 
00:19:35.316 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:35.316 22:05:46 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.316 22:05:46 -- nvmf/common.sh@7 -- # uname -s 00:19:35.316 22:05:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.316 22:05:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.316 22:05:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.316 22:05:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.316 22:05:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.316 22:05:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.316 22:05:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.316 22:05:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.316 22:05:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.316 22:05:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.316 22:05:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:35.316 22:05:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:35.316 22:05:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.316 22:05:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.316 22:05:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.316 22:05:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:35.316 22:05:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.316 22:05:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.316 22:05:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.316 22:05:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.316 22:05:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.316 22:05:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.316 22:05:46 -- paths/export.sh@5 -- # export PATH 00:19:35.316 22:05:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.316 22:05:46 -- nvmf/common.sh@46 -- # : 0 00:19:35.316 22:05:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:35.316 22:05:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:35.316 22:05:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:35.316 22:05:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.316 22:05:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.316 22:05:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:35.316 22:05:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:35.316 22:05:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:35.316 22:05:46 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.316 22:05:46 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.316 22:05:46 -- target/nmic.sh@14 -- # nvmftestinit 00:19:35.316 22:05:46 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:35.316 22:05:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.316 22:05:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:35.316 22:05:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:35.316 22:05:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:35.316 22:05:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.316 22:05:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.316 22:05:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.316 22:05:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:35.316 22:05:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:35.316 22:05:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:35.316 22:05:46 -- common/autotest_common.sh@10 -- # set +x 00:19:43.435 22:05:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:43.435 22:05:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:43.435 22:05:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:43.435 22:05:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:43.435 22:05:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:43.435 22:05:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:43.435 22:05:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:43.435 22:05:54 -- nvmf/common.sh@294 -- # net_devs=() 00:19:43.435 22:05:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:43.435 22:05:54 -- nvmf/common.sh@295 -- # 
e810=() 00:19:43.435 22:05:54 -- nvmf/common.sh@295 -- # local -ga e810 00:19:43.435 22:05:54 -- nvmf/common.sh@296 -- # x722=() 00:19:43.435 22:05:54 -- nvmf/common.sh@296 -- # local -ga x722 00:19:43.435 22:05:54 -- nvmf/common.sh@297 -- # mlx=() 00:19:43.435 22:05:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:43.435 22:05:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.435 22:05:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:43.435 22:05:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:43.435 22:05:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:43.435 22:05:54 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:43.435 22:05:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:43.435 22:05:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:43.435 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:43.435 22:05:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:43.435 22:05:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:43.435 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:43.435 22:05:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:43.435 22:05:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:43.435 22:05:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.435 22:05:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:19:43.435 22:05:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.435 22:05:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:43.435 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:43.435 22:05:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.435 22:05:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.435 22:05:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.435 22:05:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.435 22:05:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:43.435 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:43.435 22:05:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.435 22:05:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:43.435 22:05:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:43.435 22:05:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:43.435 22:05:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:43.435 22:05:54 -- nvmf/common.sh@57 -- # uname 00:19:43.435 22:05:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:43.435 22:05:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:43.435 22:05:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:43.435 22:05:54 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:43.435 22:05:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:43.435 22:05:54 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:43.435 22:05:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:43.435 22:05:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:43.435 22:05:54 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:43.435 22:05:54 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:43.435 22:05:54 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:43.435 22:05:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:43.435 22:05:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:43.435 22:05:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:43.435 22:05:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:43.435 22:05:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:43.435 22:05:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:43.435 22:05:54 -- nvmf/common.sh@104 -- # continue 2 00:19:43.435 22:05:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.435 22:05:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:43.435 22:05:54 -- nvmf/common.sh@104 -- # continue 2 00:19:43.435 22:05:54 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:19:43.435 22:05:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:43.435 22:05:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:43.435 22:05:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:43.435 22:05:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:43.435 22:05:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:43.435 22:05:54 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:43.435 22:05:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:43.435 22:05:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:43.435 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:43.435 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:43.435 altname enp217s0f0np0 00:19:43.435 altname ens818f0np0 00:19:43.435 inet 192.168.100.8/24 scope global mlx_0_0 00:19:43.435 valid_lft forever preferred_lft forever 00:19:43.435 22:05:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:43.435 22:05:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:43.435 22:05:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:43.435 22:05:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:43.435 22:05:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:43.435 22:05:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:43.435 22:05:54 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:43.435 22:05:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:43.436 22:05:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:43.436 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:43.436 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:43.436 altname enp217s0f1np1 00:19:43.436 altname ens818f1np1 00:19:43.436 inet 192.168.100.9/24 scope global mlx_0_1 00:19:43.436 valid_lft forever preferred_lft forever 00:19:43.436 22:05:54 -- nvmf/common.sh@410 -- # return 0 00:19:43.436 22:05:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:43.436 22:05:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:43.436 22:05:54 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:43.436 22:05:54 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:43.436 22:05:54 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:43.436 22:05:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:43.436 22:05:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:43.436 22:05:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:43.436 22:05:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:43.436 22:05:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:43.436 22:05:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:43.436 22:05:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.436 22:05:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:43.436 22:05:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:43.436 22:05:54 -- nvmf/common.sh@104 -- # continue 2 00:19:43.436 22:05:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:43.436 22:05:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.436 22:05:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:43.436 22:05:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:43.436 22:05:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:43.436 22:05:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:43.436 22:05:54 -- 
nvmf/common.sh@104 -- # continue 2 00:19:43.436 22:05:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:43.436 22:05:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:43.436 22:05:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:43.436 22:05:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:43.436 22:05:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:43.436 22:05:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:43.436 22:05:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:43.436 22:05:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:43.436 22:05:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:43.436 22:05:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:43.436 22:05:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:43.436 22:05:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:43.436 22:05:54 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:43.436 192.168.100.9' 00:19:43.436 22:05:54 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:43.436 192.168.100.9' 00:19:43.436 22:05:54 -- nvmf/common.sh@445 -- # head -n 1 00:19:43.436 22:05:54 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:43.436 22:05:54 -- nvmf/common.sh@446 -- # head -n 1 00:19:43.436 22:05:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:43.436 192.168.100.9' 00:19:43.436 22:05:54 -- nvmf/common.sh@446 -- # tail -n +2 00:19:43.436 22:05:54 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:43.436 22:05:54 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:43.436 22:05:54 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:43.436 22:05:54 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:43.436 22:05:54 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:43.436 22:05:54 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:43.436 22:05:54 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:43.436 22:05:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:43.436 22:05:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:43.436 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:19:43.436 22:05:54 -- nvmf/common.sh@469 -- # nvmfpid=2212062 00:19:43.436 22:05:54 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:43.436 22:05:54 -- nvmf/common.sh@470 -- # waitforlisten 2212062 00:19:43.436 22:05:54 -- common/autotest_common.sh@819 -- # '[' -z 2212062 ']' 00:19:43.436 22:05:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.436 22:05:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.436 22:05:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.436 22:05:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.436 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:19:43.436 [2024-07-26 22:05:54.547255] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:43.436 [2024-07-26 22:05:54.547304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.436 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.436 [2024-07-26 22:05:54.633134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.695 [2024-07-26 22:05:54.673766] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:43.695 [2024-07-26 22:05:54.673868] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.695 [2024-07-26 22:05:54.673878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.695 [2024-07-26 22:05:54.673887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.695 [2024-07-26 22:05:54.673935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.695 [2024-07-26 22:05:54.674029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.695 [2024-07-26 22:05:54.674111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.695 [2024-07-26 22:05:54.674113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.261 22:05:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:44.261 22:05:55 -- common/autotest_common.sh@852 -- # return 0 00:19:44.261 22:05:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:44.261 22:05:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:44.261 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.261 22:05:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.261 22:05:55 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:44.261 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.261 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.261 [2024-07-26 22:05:55.430795] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15c04b0/0x15c49a0) succeed. 00:19:44.261 [2024-07-26 22:05:55.440996] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15c1aa0/0x1606030) succeed. 
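The bring-up this nmic test drives next reduces to a short series of RPCs against the running target; roughly, and hedged as a sketch (rpc_cmd in the trace corresponds to scripts/rpc.py calls of the same name, and the NQN, serial, and address values are the ones used in this run):
  # same target invocation and RPC helper the trace uses
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport with the shared-buffer settings shown above
  $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # 64 MiB / 512 B-block malloc bdev to back the namespace
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  # subsystem, namespace, and RDMA listener on the first target IP
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420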
00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:44.520 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.520 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.520 Malloc0 00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:44.520 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.520 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:44.520 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.520 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:44.520 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.520 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.520 [2024-07-26 22:05:55.608662] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:44.520 test case1: single bdev can't be used in multiple subsystems 00:19:44.520 22:05:55 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:44.520 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.520 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:44.520 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.520 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@28 -- # nmic_status=0 00:19:44.520 22:05:55 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:44.520 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.520 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.520 [2024-07-26 22:05:55.632428] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:44.520 [2024-07-26 22:05:55.632450] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:44.520 [2024-07-26 22:05:55.632460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:44.520 request: 00:19:44.520 { 00:19:44.520 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:44.520 "namespace": { 00:19:44.520 "bdev_name": "Malloc0" 00:19:44.520 }, 00:19:44.520 "method": "nvmf_subsystem_add_ns", 00:19:44.520 "req_id": 1 00:19:44.520 } 00:19:44.520 Got JSON-RPC error response 00:19:44.520 response: 00:19:44.520 { 
00:19:44.520 "code": -32602, 00:19:44.520 "message": "Invalid parameters" 00:19:44.520 } 00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@29 -- # nmic_status=1 00:19:44.520 22:05:55 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:44.520 22:05:55 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:44.520 Adding namespace failed - expected result. 00:19:44.520 22:05:55 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:44.520 test case2: host connect to nvmf target in multiple paths 00:19:44.520 22:05:55 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:44.520 22:05:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.520 22:05:55 -- common/autotest_common.sh@10 -- # set +x 00:19:44.520 [2024-07-26 22:05:55.644491] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:44.520 22:05:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.520 22:05:55 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:45.455 22:05:56 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:19:46.388 22:05:57 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:46.388 22:05:57 -- common/autotest_common.sh@1177 -- # local i=0 00:19:46.388 22:05:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:46.388 22:05:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:46.388 22:05:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:48.918 22:05:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:48.918 22:05:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:48.918 22:05:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:48.918 22:05:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:48.918 22:05:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:48.918 22:05:59 -- common/autotest_common.sh@1187 -- # return 0 00:19:48.918 22:05:59 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:48.918 [global] 00:19:48.918 thread=1 00:19:48.918 invalidate=1 00:19:48.918 rw=write 00:19:48.918 time_based=1 00:19:48.918 runtime=1 00:19:48.918 ioengine=libaio 00:19:48.918 direct=1 00:19:48.918 bs=4096 00:19:48.918 iodepth=1 00:19:48.918 norandommap=0 00:19:48.918 numjobs=1 00:19:48.918 00:19:48.918 verify_dump=1 00:19:48.918 verify_backlog=512 00:19:48.918 verify_state_save=0 00:19:48.918 do_verify=1 00:19:48.918 verify=crc32c-intel 00:19:48.918 [job0] 00:19:48.918 filename=/dev/nvme0n1 00:19:48.918 Could not set queue depth (nvme0n1) 00:19:48.918 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:48.918 fio-3.35 00:19:48.918 Starting 1 thread 00:19:50.291 00:19:50.291 job0: (groupid=0, jobs=1): err= 0: pid=2213086: Fri Jul 26 22:06:01 2024 00:19:50.291 read: IOPS=7160, 
BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:19:50.291 slat (nsec): min=8240, max=37275, avg=9054.06, stdev=1029.92 00:19:50.291 clat (usec): min=24, max=136, avg=57.14, stdev= 3.65 00:19:50.291 lat (usec): min=55, max=145, avg=66.19, stdev= 3.73 00:19:50.291 clat percentiles (usec): 00:19:50.291 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 55], 00:19:50.291 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:19:50.291 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 62], 95.00th=[ 64], 00:19:50.292 | 99.00th=[ 67], 99.50th=[ 68], 99.90th=[ 73], 99.95th=[ 83], 00:19:50.292 | 99.99th=[ 137] 00:19:50.292 write: IOPS=7531, BW=29.4MiB/s (30.8MB/s)(29.4MiB/1001msec); 0 zone resets 00:19:50.292 slat (nsec): min=8012, max=39418, avg=10653.75, stdev=1157.84 00:19:50.292 clat (usec): min=22, max=172, avg=55.28, stdev= 3.95 00:19:50.292 lat (usec): min=54, max=183, avg=65.94, stdev= 4.11 00:19:50.292 clat percentiles (usec): 00:19:50.292 | 1.00th=[ 49], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 52], 00:19:50.292 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:19:50.292 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 60], 95.00th=[ 62], 00:19:50.292 | 99.00th=[ 65], 99.50th=[ 67], 99.90th=[ 73], 99.95th=[ 78], 00:19:50.292 | 99.99th=[ 174] 00:19:50.292 bw ( KiB/s): min=30256, max=30256, per=100.00%, avg=30256.00, stdev= 0.00, samples=1 00:19:50.292 iops : min= 7564, max= 7564, avg=7564.00, stdev= 0.00, samples=1 00:19:50.292 lat (usec) : 50=3.36%, 100=96.62%, 250=0.02% 00:19:50.292 cpu : usr=10.50%, sys=19.10%, ctx=14707, majf=0, minf=2 00:19:50.292 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:50.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:50.292 issued rwts: total=7168,7539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:50.292 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:50.292 00:19:50.292 Run status group 0 (all jobs): 00:19:50.292 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:19:50.292 WRITE: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=29.4MiB (30.9MB), run=1001-1001msec 00:19:50.292 00:19:50.292 Disk stats (read/write): 00:19:50.292 nvme0n1: ios=6609/6656, merge=0/0, ticks=330/334, in_queue=664, util=90.48% 00:19:50.292 22:06:01 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:52.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:52.189 22:06:02 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:52.189 22:06:02 -- common/autotest_common.sh@1198 -- # local i=0 00:19:52.189 22:06:02 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:52.189 22:06:02 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.189 22:06:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:52.189 22:06:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:52.189 22:06:03 -- common/autotest_common.sh@1210 -- # return 0 00:19:52.189 22:06:03 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:52.189 22:06:03 -- target/nmic.sh@53 -- # nvmftestfini 00:19:52.189 22:06:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.189 22:06:03 -- nvmf/common.sh@116 -- # sync 00:19:52.189 22:06:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:52.189 22:06:03 -- nvmf/common.sh@118 -- # 
'[' rdma == rdma ']' 00:19:52.189 22:06:03 -- nvmf/common.sh@119 -- # set +e 00:19:52.189 22:06:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:52.189 22:06:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:52.189 rmmod nvme_rdma 00:19:52.189 rmmod nvme_fabrics 00:19:52.189 22:06:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:52.189 22:06:03 -- nvmf/common.sh@123 -- # set -e 00:19:52.189 22:06:03 -- nvmf/common.sh@124 -- # return 0 00:19:52.189 22:06:03 -- nvmf/common.sh@477 -- # '[' -n 2212062 ']' 00:19:52.189 22:06:03 -- nvmf/common.sh@478 -- # killprocess 2212062 00:19:52.189 22:06:03 -- common/autotest_common.sh@926 -- # '[' -z 2212062 ']' 00:19:52.189 22:06:03 -- common/autotest_common.sh@930 -- # kill -0 2212062 00:19:52.189 22:06:03 -- common/autotest_common.sh@931 -- # uname 00:19:52.189 22:06:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.189 22:06:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2212062 00:19:52.189 22:06:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:52.189 22:06:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:52.189 22:06:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2212062' 00:19:52.189 killing process with pid 2212062 00:19:52.189 22:06:03 -- common/autotest_common.sh@945 -- # kill 2212062 00:19:52.189 22:06:03 -- common/autotest_common.sh@950 -- # wait 2212062 00:19:52.189 22:06:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:52.189 22:06:03 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:52.189 00:19:52.189 real 0m17.199s 00:19:52.189 user 0m45.777s 00:19:52.189 sys 0m7.196s 00:19:52.189 22:06:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.189 22:06:03 -- common/autotest_common.sh@10 -- # set +x 00:19:52.189 ************************************ 00:19:52.189 END TEST nvmf_nmic 00:19:52.189 ************************************ 00:19:52.447 22:06:03 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:52.447 22:06:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:52.447 22:06:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:52.447 22:06:03 -- common/autotest_common.sh@10 -- # set +x 00:19:52.447 ************************************ 00:19:52.447 START TEST nvmf_fio_target 00:19:52.447 ************************************ 00:19:52.447 22:06:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:19:52.447 * Looking for test storage... 
00:19:52.447 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:52.447 22:06:03 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:52.447 22:06:03 -- nvmf/common.sh@7 -- # uname -s 00:19:52.447 22:06:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:52.447 22:06:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:52.447 22:06:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:52.447 22:06:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:52.447 22:06:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:52.447 22:06:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:52.447 22:06:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:52.447 22:06:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:52.447 22:06:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:52.447 22:06:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:52.447 22:06:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:52.447 22:06:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:52.447 22:06:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:52.447 22:06:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:52.447 22:06:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:52.448 22:06:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:52.448 22:06:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:52.448 22:06:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:52.448 22:06:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:52.448 22:06:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.448 22:06:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.448 22:06:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.448 22:06:03 -- paths/export.sh@5 -- # export PATH 00:19:52.448 22:06:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:52.448 22:06:03 -- nvmf/common.sh@46 -- # : 0 00:19:52.448 22:06:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:52.448 22:06:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:52.448 22:06:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:52.448 22:06:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:52.448 22:06:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:52.448 22:06:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:52.448 22:06:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:52.448 22:06:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:52.448 22:06:03 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:52.448 22:06:03 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:52.448 22:06:03 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:52.448 22:06:03 -- target/fio.sh@16 -- # nvmftestinit 00:19:52.448 22:06:03 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:52.448 22:06:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.448 22:06:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:52.448 22:06:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:52.448 22:06:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:52.448 22:06:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.448 22:06:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.448 22:06:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.448 22:06:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:52.448 22:06:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:52.448 22:06:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:52.448 22:06:03 -- common/autotest_common.sh@10 -- # set +x 00:20:00.558 22:06:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:00.558 22:06:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:00.558 22:06:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:00.558 22:06:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:00.558 22:06:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:00.558 22:06:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:00.558 22:06:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:00.558 22:06:11 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:00.558 22:06:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:00.558 22:06:11 -- nvmf/common.sh@295 -- # e810=() 00:20:00.558 22:06:11 -- nvmf/common.sh@295 -- # local -ga e810 00:20:00.558 22:06:11 -- nvmf/common.sh@296 -- # x722=() 00:20:00.558 22:06:11 -- nvmf/common.sh@296 -- # local -ga x722 00:20:00.558 22:06:11 -- nvmf/common.sh@297 -- # mlx=() 00:20:00.558 22:06:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:00.558 22:06:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.558 22:06:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:00.558 22:06:11 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:00.558 22:06:11 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:00.558 22:06:11 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:00.558 22:06:11 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:00.558 22:06:11 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:00.558 22:06:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:00.558 22:06:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.558 22:06:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:00.558 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:00.559 22:06:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:00.559 22:06:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:00.559 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:00.559 22:06:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:00.559 22:06:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:00.559 22:06:11 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.559 22:06:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.559 22:06:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.559 22:06:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:00.559 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.559 22:06:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.559 22:06:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.559 22:06:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.559 22:06:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:00.559 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:00.559 22:06:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.559 22:06:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:00.559 22:06:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:00.559 22:06:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:00.559 22:06:11 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:00.559 22:06:11 -- nvmf/common.sh@57 -- # uname 00:20:00.559 22:06:11 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:00.559 22:06:11 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:00.559 22:06:11 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:00.559 22:06:11 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:00.559 22:06:11 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:00.559 22:06:11 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:00.559 22:06:11 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:00.559 22:06:11 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:00.559 22:06:11 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:00.559 22:06:11 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:00.559 22:06:11 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:00.559 22:06:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:00.559 22:06:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:00.559 22:06:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:00.559 22:06:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:00.559 22:06:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:00.559 22:06:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@104 -- # continue 2 00:20:00.559 22:06:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:00.559 22:06:11 -- 
nvmf/common.sh@104 -- # continue 2 00:20:00.559 22:06:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:00.559 22:06:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:00.559 22:06:11 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:00.559 22:06:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:00.559 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:00.559 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:00.559 altname enp217s0f0np0 00:20:00.559 altname ens818f0np0 00:20:00.559 inet 192.168.100.8/24 scope global mlx_0_0 00:20:00.559 valid_lft forever preferred_lft forever 00:20:00.559 22:06:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:00.559 22:06:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:00.559 22:06:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:00.559 22:06:11 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:00.559 22:06:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:00.559 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:00.559 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:00.559 altname enp217s0f1np1 00:20:00.559 altname ens818f1np1 00:20:00.559 inet 192.168.100.9/24 scope global mlx_0_1 00:20:00.559 valid_lft forever preferred_lft forever 00:20:00.559 22:06:11 -- nvmf/common.sh@410 -- # return 0 00:20:00.559 22:06:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:00.559 22:06:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:00.559 22:06:11 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:00.559 22:06:11 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:00.559 22:06:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:00.559 22:06:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:00.559 22:06:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:00.559 22:06:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:00.559 22:06:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:00.559 22:06:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@104 -- # continue 2 00:20:00.559 22:06:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:00.559 22:06:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.559 22:06:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:20:00.559 22:06:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:00.559 22:06:11 -- nvmf/common.sh@104 -- # continue 2 00:20:00.559 22:06:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:00.559 22:06:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:00.559 22:06:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:00.559 22:06:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:00.559 22:06:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:00.559 22:06:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:00.559 22:06:11 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:00.559 192.168.100.9' 00:20:00.559 22:06:11 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:00.559 192.168.100.9' 00:20:00.559 22:06:11 -- nvmf/common.sh@445 -- # head -n 1 00:20:00.559 22:06:11 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:00.559 22:06:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:00.559 192.168.100.9' 00:20:00.559 22:06:11 -- nvmf/common.sh@446 -- # tail -n +2 00:20:00.559 22:06:11 -- nvmf/common.sh@446 -- # head -n 1 00:20:00.559 22:06:11 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:00.559 22:06:11 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:00.559 22:06:11 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:00.559 22:06:11 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:00.559 22:06:11 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:00.559 22:06:11 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:00.559 22:06:11 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:00.559 22:06:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:00.559 22:06:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:00.559 22:06:11 -- common/autotest_common.sh@10 -- # set +x 00:20:00.560 22:06:11 -- nvmf/common.sh@469 -- # nvmfpid=2218323 00:20:00.560 22:06:11 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.560 22:06:11 -- nvmf/common.sh@470 -- # waitforlisten 2218323 00:20:00.560 22:06:11 -- common/autotest_common.sh@819 -- # '[' -z 2218323 ']' 00:20:00.560 22:06:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.560 22:06:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.560 22:06:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.560 22:06:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.560 22:06:11 -- common/autotest_common.sh@10 -- # set +x 00:20:00.560 [2024-07-26 22:06:11.523384] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:00.560 [2024-07-26 22:06:11.523434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.560 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.560 [2024-07-26 22:06:11.604455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.560 [2024-07-26 22:06:11.640931] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:00.560 [2024-07-26 22:06:11.641035] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.560 [2024-07-26 22:06:11.641045] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.560 [2024-07-26 22:06:11.641057] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.560 [2024-07-26 22:06:11.641110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.560 [2024-07-26 22:06:11.641208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.560 [2024-07-26 22:06:11.641292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.560 [2024-07-26 22:06:11.641294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.126 22:06:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.126 22:06:12 -- common/autotest_common.sh@852 -- # return 0 00:20:01.126 22:06:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:01.126 22:06:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:01.126 22:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:01.384 22:06:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.385 22:06:12 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:01.385 [2024-07-26 22:06:12.550265] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b4d4b0/0x1b519a0) succeed. 00:20:01.385 [2024-07-26 22:06:12.560723] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b4eaa0/0x1b93030) succeed. 
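For reference, the target bring-up traced above reduces to two steps: start the nvmf_tgt application, then create an RDMA transport through the RPC socket. A minimal sketch, using only the binaries, paths, and flags that appear in this run:

    # start the SPDK NVMe-oF target (same core mask and trace flags as the trace above)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # wait until the app listens on /var/tmp/spdk.sock, then add the RDMA transport
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

On success the target emits one create_ib_device notice per Mellanox port, as logged here for mlx5_0 and mlx5_1; the subsystem, bdev, and listener RPCs that fio.sh issues next build on this transport.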
00:20:01.674 22:06:12 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.674 22:06:12 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:01.933 22:06:12 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:01.933 22:06:13 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:01.933 22:06:13 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:02.194 22:06:13 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:02.194 22:06:13 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:02.456 22:06:13 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:02.456 22:06:13 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:02.457 22:06:13 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:02.716 22:06:13 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:02.716 22:06:13 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:02.975 22:06:14 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:02.975 22:06:14 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:03.234 22:06:14 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:03.234 22:06:14 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:03.234 22:06:14 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:03.493 22:06:14 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:03.493 22:06:14 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:03.752 22:06:14 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:03.752 22:06:14 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:03.752 22:06:14 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:04.011 [2024-07-26 22:06:15.098403] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:04.011 22:06:15 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:04.270 22:06:15 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:04.270 22:06:15 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:05.650 22:06:16 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:05.650 22:06:16 -- common/autotest_common.sh@1177 -- # local 
i=0 00:20:05.650 22:06:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:05.650 22:06:16 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:20:05.650 22:06:16 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:20:05.650 22:06:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:07.580 22:06:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:07.580 22:06:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:07.580 22:06:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:20:07.580 22:06:18 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:20:07.580 22:06:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:07.580 22:06:18 -- common/autotest_common.sh@1187 -- # return 0 00:20:07.580 22:06:18 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:07.580 [global] 00:20:07.580 thread=1 00:20:07.580 invalidate=1 00:20:07.580 rw=write 00:20:07.580 time_based=1 00:20:07.580 runtime=1 00:20:07.580 ioengine=libaio 00:20:07.580 direct=1 00:20:07.580 bs=4096 00:20:07.580 iodepth=1 00:20:07.580 norandommap=0 00:20:07.580 numjobs=1 00:20:07.580 00:20:07.580 verify_dump=1 00:20:07.580 verify_backlog=512 00:20:07.580 verify_state_save=0 00:20:07.580 do_verify=1 00:20:07.580 verify=crc32c-intel 00:20:07.580 [job0] 00:20:07.580 filename=/dev/nvme0n1 00:20:07.580 [job1] 00:20:07.580 filename=/dev/nvme0n2 00:20:07.580 [job2] 00:20:07.580 filename=/dev/nvme0n3 00:20:07.580 [job3] 00:20:07.580 filename=/dev/nvme0n4 00:20:07.580 Could not set queue depth (nvme0n1) 00:20:07.580 Could not set queue depth (nvme0n2) 00:20:07.580 Could not set queue depth (nvme0n3) 00:20:07.580 Could not set queue depth (nvme0n4) 00:20:07.852 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:07.852 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:07.852 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:07.852 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:07.852 fio-3.35 00:20:07.852 Starting 4 threads 00:20:09.253 00:20:09.253 job0: (groupid=0, jobs=1): err= 0: pid=2219652: Fri Jul 26 22:06:20 2024 00:20:09.253 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:20:09.253 slat (nsec): min=8232, max=32544, avg=8946.04, stdev=803.54 00:20:09.253 clat (usec): min=61, max=200, avg=82.98, stdev=16.71 00:20:09.253 lat (usec): min=71, max=209, avg=91.92, stdev=16.73 00:20:09.253 clat percentiles (usec): 00:20:09.253 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:20:09.253 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 80], 00:20:09.253 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 125], 00:20:09.253 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 176], 99.95th=[ 192], 00:20:09.253 | 99.99th=[ 200] 00:20:09.253 write: IOPS=5507, BW=21.5MiB/s (22.6MB/s)(21.5MiB/1001msec); 0 zone resets 00:20:09.253 slat (nsec): min=7980, max=61782, avg=10746.36, stdev=1211.80 00:20:09.253 clat (usec): min=60, max=287, avg=81.51, stdev=19.39 00:20:09.253 lat (usec): min=71, max=302, avg=92.26, stdev=19.61 00:20:09.253 clat percentiles (usec): 00:20:09.253 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 70], 20.00th=[ 72], 00:20:09.253 | 30.00th=[ 73], 
40.00th=[ 75], 50.00th=[ 76], 60.00th=[ 78], 00:20:09.253 | 70.00th=[ 80], 80.00th=[ 83], 90.00th=[ 111], 95.00th=[ 133], 00:20:09.253 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 198], 99.95th=[ 204], 00:20:09.253 | 99.99th=[ 289] 00:20:09.253 bw ( KiB/s): min=24576, max=24576, per=32.67%, avg=24576.00, stdev= 0.00, samples=1 00:20:09.253 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:20:09.253 lat (usec) : 100=88.86%, 250=11.13%, 500=0.02% 00:20:09.253 cpu : usr=8.80%, sys=12.80%, ctx=10634, majf=0, minf=1 00:20:09.253 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.253 issued rwts: total=5120,5513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.253 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.253 job1: (groupid=0, jobs=1): err= 0: pid=2219661: Fri Jul 26 22:06:20 2024 00:20:09.254 read: IOPS=4061, BW=15.9MiB/s (16.6MB/s)(15.9MiB/1001msec) 00:20:09.254 slat (nsec): min=8284, max=31168, avg=9069.98, stdev=840.58 00:20:09.254 clat (usec): min=64, max=257, avg=113.40, stdev=18.51 00:20:09.254 lat (usec): min=73, max=265, avg=122.47, stdev=18.57 00:20:09.254 clat percentiles (usec): 00:20:09.254 | 1.00th=[ 69], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 106], 00:20:09.254 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 121], 00:20:09.254 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 130], 95.00th=[ 135], 00:20:09.254 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 184], 00:20:09.254 | 99.99th=[ 258] 00:20:09.254 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:09.254 slat (nsec): min=10104, max=38072, avg=10903.75, stdev=1006.47 00:20:09.254 clat (usec): min=59, max=175, avg=107.35, stdev=17.16 00:20:09.254 lat (usec): min=69, max=186, avg=118.26, stdev=17.20 00:20:09.254 clat percentiles (usec): 00:20:09.254 | 1.00th=[ 67], 5.00th=[ 72], 10.00th=[ 76], 20.00th=[ 99], 00:20:09.254 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 114], 00:20:09.254 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 124], 95.00th=[ 129], 00:20:09.254 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 165], 99.95th=[ 174], 00:20:09.254 | 99.99th=[ 176] 00:20:09.254 bw ( KiB/s): min=16384, max=16384, per=21.78%, avg=16384.00, stdev= 0.00, samples=1 00:20:09.254 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:09.254 lat (usec) : 100=18.72%, 250=81.27%, 500=0.01% 00:20:09.254 cpu : usr=4.60%, sys=12.30%, ctx=8162, majf=0, minf=2 00:20:09.254 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.254 issued rwts: total=4066,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.254 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.254 job2: (groupid=0, jobs=1): err= 0: pid=2219674: Fri Jul 26 22:06:20 2024 00:20:09.254 read: IOPS=3595, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1001msec) 00:20:09.254 slat (nsec): min=8441, max=33822, avg=9190.46, stdev=917.09 00:20:09.254 clat (usec): min=73, max=189, avg=120.13, stdev=10.46 00:20:09.254 lat (usec): min=82, max=199, avg=129.32, stdev=10.47 00:20:09.254 clat percentiles (usec): 00:20:09.254 | 1.00th=[ 97], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 114], 
00:20:09.254 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:20:09.254 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 137], 00:20:09.254 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 178], 99.95th=[ 186], 00:20:09.254 | 99.99th=[ 190] 00:20:09.254 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:09.254 slat (nsec): min=10302, max=41640, avg=11195.20, stdev=1305.75 00:20:09.254 clat (usec): min=71, max=264, avg=115.31, stdev=13.62 00:20:09.254 lat (usec): min=83, max=278, avg=126.51, stdev=13.75 00:20:09.254 clat percentiles (usec): 00:20:09.254 | 1.00th=[ 87], 5.00th=[ 98], 10.00th=[ 102], 20.00th=[ 106], 00:20:09.254 | 30.00th=[ 109], 40.00th=[ 112], 50.00th=[ 114], 60.00th=[ 116], 00:20:09.254 | 70.00th=[ 119], 80.00th=[ 124], 90.00th=[ 135], 95.00th=[ 143], 00:20:09.254 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 190], 99.95th=[ 202], 00:20:09.254 | 99.99th=[ 265] 00:20:09.254 bw ( KiB/s): min=16384, max=16384, per=21.78%, avg=16384.00, stdev= 0.00, samples=1 00:20:09.254 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:09.254 lat (usec) : 100=4.64%, 250=95.35%, 500=0.01% 00:20:09.254 cpu : usr=5.50%, sys=10.70%, ctx=7695, majf=0, minf=1 00:20:09.254 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.254 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.254 issued rwts: total=3599,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.254 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.254 job3: (groupid=0, jobs=1): err= 0: pid=2219680: Fri Jul 26 22:06:20 2024 00:20:09.254 read: IOPS=4642, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1001msec) 00:20:09.254 slat (nsec): min=8381, max=21761, avg=9047.26, stdev=815.18 00:20:09.254 clat (usec): min=73, max=224, avg=91.40, stdev=13.58 00:20:09.254 lat (usec): min=81, max=233, avg=100.45, stdev=13.63 00:20:09.254 clat percentiles (usec): 00:20:09.254 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 83], 00:20:09.254 | 30.00th=[ 85], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 90], 00:20:09.254 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 120], 00:20:09.254 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 165], 99.95th=[ 174], 00:20:09.254 | 99.99th=[ 225] 00:20:09.254 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:20:09.254 slat (nsec): min=10192, max=37829, avg=10962.79, stdev=1127.74 00:20:09.254 clat (usec): min=64, max=259, avg=89.49, stdev=18.16 00:20:09.254 lat (usec): min=79, max=271, avg=100.45, stdev=18.25 00:20:09.254 clat percentiles (usec): 00:20:09.254 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 79], 00:20:09.254 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 84], 60.00th=[ 86], 00:20:09.254 | 70.00th=[ 88], 80.00th=[ 92], 90.00th=[ 115], 95.00th=[ 137], 00:20:09.254 | 99.00th=[ 151], 99.50th=[ 161], 99.90th=[ 184], 99.95th=[ 196], 00:20:09.254 | 99.99th=[ 260] 00:20:09.254 bw ( KiB/s): min=20656, max=20656, per=27.46%, avg=20656.00, stdev= 0.00, samples=1 00:20:09.254 iops : min= 5164, max= 5164, avg=5164.00, stdev= 0.00, samples=1 00:20:09.254 lat (usec) : 100=86.11%, 250=13.88%, 500=0.01% 00:20:09.254 cpu : usr=6.90%, sys=13.10%, ctx=9767, majf=0, minf=1 00:20:09.254 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.254 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.254 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.254 issued rwts: total=4647,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.254 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.254 00:20:09.254 Run status group 0 (all jobs): 00:20:09.254 READ: bw=68.0MiB/s (71.3MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=68.1MiB (71.4MB), run=1001-1001msec 00:20:09.254 WRITE: bw=73.5MiB/s (77.0MB/s), 16.0MiB/s-21.5MiB/s (16.8MB/s-22.6MB/s), io=73.5MiB (77.1MB), run=1001-1001msec 00:20:09.254 00:20:09.254 Disk stats (read/write): 00:20:09.254 nvme0n1: ios=4657/4822, merge=0/0, ticks=346/335, in_queue=681, util=84.27% 00:20:09.254 nvme0n2: ios=3072/3416, merge=0/0, ticks=337/375, in_queue=712, util=85.09% 00:20:09.254 nvme0n3: ios=3072/3416, merge=0/0, ticks=346/341, in_queue=687, util=88.34% 00:20:09.254 nvme0n4: ios=4096/4459, merge=0/0, ticks=326/326, in_queue=652, util=89.47% 00:20:09.254 22:06:20 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:09.254 [global] 00:20:09.254 thread=1 00:20:09.254 invalidate=1 00:20:09.254 rw=randwrite 00:20:09.254 time_based=1 00:20:09.254 runtime=1 00:20:09.254 ioengine=libaio 00:20:09.254 direct=1 00:20:09.254 bs=4096 00:20:09.254 iodepth=1 00:20:09.254 norandommap=0 00:20:09.254 numjobs=1 00:20:09.254 00:20:09.254 verify_dump=1 00:20:09.254 verify_backlog=512 00:20:09.254 verify_state_save=0 00:20:09.254 do_verify=1 00:20:09.254 verify=crc32c-intel 00:20:09.254 [job0] 00:20:09.254 filename=/dev/nvme0n1 00:20:09.254 [job1] 00:20:09.254 filename=/dev/nvme0n2 00:20:09.254 [job2] 00:20:09.254 filename=/dev/nvme0n3 00:20:09.254 [job3] 00:20:09.254 filename=/dev/nvme0n4 00:20:09.254 Could not set queue depth (nvme0n1) 00:20:09.254 Could not set queue depth (nvme0n2) 00:20:09.254 Could not set queue depth (nvme0n3) 00:20:09.254 Could not set queue depth (nvme0n4) 00:20:09.512 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.512 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.512 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.512 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:09.512 fio-3.35 00:20:09.512 Starting 4 threads 00:20:10.891 00:20:10.891 job0: (groupid=0, jobs=1): err= 0: pid=2220072: Fri Jul 26 22:06:21 2024 00:20:10.891 read: IOPS=4133, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1001msec) 00:20:10.891 slat (nsec): min=4776, max=32784, avg=9607.18, stdev=2312.81 00:20:10.891 clat (usec): min=65, max=176, avg=103.97, stdev=17.00 00:20:10.891 lat (usec): min=74, max=185, avg=113.58, stdev=17.35 00:20:10.891 clat percentiles (usec): 00:20:10.891 | 1.00th=[ 70], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 82], 00:20:10.891 | 30.00th=[ 103], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 113], 00:20:10.891 | 70.00th=[ 115], 80.00th=[ 118], 90.00th=[ 121], 95.00th=[ 124], 00:20:10.891 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 151], 99.95th=[ 155], 00:20:10.891 | 99.99th=[ 178] 00:20:10.891 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:20:10.891 slat (nsec): min=4397, max=50132, avg=11553.26, stdev=2743.22 00:20:10.891 clat (usec): min=60, max=303, avg=98.91, stdev=17.83 00:20:10.891 lat (usec): min=67, max=313, avg=110.46, stdev=18.55 00:20:10.891 clat 
percentiles (usec): 00:20:10.891 | 1.00th=[ 66], 5.00th=[ 70], 10.00th=[ 72], 20.00th=[ 76], 00:20:10.891 | 30.00th=[ 93], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 109], 00:20:10.891 | 70.00th=[ 111], 80.00th=[ 114], 90.00th=[ 117], 95.00th=[ 121], 00:20:10.891 | 99.00th=[ 127], 99.50th=[ 133], 99.90th=[ 151], 99.95th=[ 155], 00:20:10.891 | 99.99th=[ 306] 00:20:10.891 bw ( KiB/s): min=20480, max=20480, per=29.89%, avg=20480.00, stdev= 0.00, samples=1 00:20:10.891 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:10.891 lat (usec) : 100=31.87%, 250=68.12%, 500=0.01% 00:20:10.891 cpu : usr=5.50%, sys=12.00%, ctx=8748, majf=0, minf=2 00:20:10.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.891 issued rwts: total=4138,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.891 job1: (groupid=0, jobs=1): err= 0: pid=2220081: Fri Jul 26 22:06:21 2024 00:20:10.891 read: IOPS=3926, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1001msec) 00:20:10.891 slat (nsec): min=8172, max=31714, avg=9040.81, stdev=857.07 00:20:10.891 clat (usec): min=72, max=178, avg=115.77, stdev= 8.52 00:20:10.891 lat (usec): min=81, max=187, avg=124.81, stdev= 8.52 00:20:10.891 clat percentiles (usec): 00:20:10.891 | 1.00th=[ 97], 5.00th=[ 104], 10.00th=[ 108], 20.00th=[ 111], 00:20:10.891 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 116], 60.00th=[ 118], 00:20:10.891 | 70.00th=[ 120], 80.00th=[ 122], 90.00th=[ 126], 95.00th=[ 129], 00:20:10.891 | 99.00th=[ 141], 99.50th=[ 153], 99.90th=[ 172], 99.95th=[ 178], 00:20:10.891 | 99.99th=[ 180] 00:20:10.891 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:10.891 slat (nsec): min=9195, max=64719, avg=10552.59, stdev=1236.90 00:20:10.891 clat (usec): min=61, max=171, avg=109.68, stdev= 9.01 00:20:10.891 lat (usec): min=72, max=182, avg=120.23, stdev= 9.03 00:20:10.891 clat percentiles (usec): 00:20:10.891 | 1.00th=[ 82], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 104], 00:20:10.891 | 30.00th=[ 106], 40.00th=[ 109], 50.00th=[ 110], 60.00th=[ 112], 00:20:10.891 | 70.00th=[ 114], 80.00th=[ 116], 90.00th=[ 120], 95.00th=[ 123], 00:20:10.891 | 99.00th=[ 137], 99.50th=[ 145], 99.90th=[ 163], 99.95th=[ 167], 00:20:10.891 | 99.99th=[ 172] 00:20:10.891 bw ( KiB/s): min=16384, max=16384, per=23.91%, avg=16384.00, stdev= 0.00, samples=1 00:20:10.891 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:10.891 lat (usec) : 100=5.47%, 250=94.53% 00:20:10.891 cpu : usr=6.10%, sys=10.40%, ctx=8027, majf=0, minf=1 00:20:10.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.891 issued rwts: total=3930,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.891 job2: (groupid=0, jobs=1): err= 0: pid=2220093: Fri Jul 26 22:06:21 2024 00:20:10.891 read: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(15.3MiB/1001msec) 00:20:10.891 slat (nsec): min=8403, max=23773, avg=9199.69, stdev=766.39 00:20:10.891 clat (usec): min=77, max=167, avg=115.63, stdev= 7.41 00:20:10.891 lat (usec): min=86, max=176, avg=124.83, stdev= 7.41 
00:20:10.891 clat percentiles (usec): 00:20:10.891 | 1.00th=[ 98], 5.00th=[ 105], 10.00th=[ 108], 20.00th=[ 111], 00:20:10.891 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 116], 60.00th=[ 118], 00:20:10.891 | 70.00th=[ 119], 80.00th=[ 121], 90.00th=[ 125], 95.00th=[ 128], 00:20:10.891 | 99.00th=[ 137], 99.50th=[ 143], 99.90th=[ 157], 99.95th=[ 165], 00:20:10.891 | 99.99th=[ 167] 00:20:10.891 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:10.891 slat (nsec): min=10120, max=38334, avg=10825.54, stdev=1058.16 00:20:10.892 clat (usec): min=69, max=259, avg=109.47, stdev= 8.03 00:20:10.892 lat (usec): min=79, max=270, avg=120.30, stdev= 8.10 00:20:10.892 clat percentiles (usec): 00:20:10.892 | 1.00th=[ 91], 5.00th=[ 99], 10.00th=[ 101], 20.00th=[ 104], 00:20:10.892 | 30.00th=[ 106], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:20:10.892 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 119], 95.00th=[ 122], 00:20:10.892 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 147], 99.95th=[ 151], 00:20:10.892 | 99.99th=[ 260] 00:20:10.892 bw ( KiB/s): min=16384, max=16384, per=23.91%, avg=16384.00, stdev= 0.00, samples=1 00:20:10.892 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:10.892 lat (usec) : 100=4.33%, 250=95.66%, 500=0.01% 00:20:10.892 cpu : usr=5.10%, sys=11.50%, ctx=8023, majf=0, minf=1 00:20:10.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:10.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.892 issued rwts: total=3927,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.892 job3: (groupid=0, jobs=1): err= 0: pid=2220100: Fri Jul 26 22:06:21 2024 00:20:10.892 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:20:10.892 slat (nsec): min=8271, max=49288, avg=9588.15, stdev=2122.01 00:20:10.892 clat (usec): min=74, max=145, avg=106.51, stdev=12.49 00:20:10.892 lat (usec): min=83, max=155, avg=116.10, stdev=12.57 00:20:10.892 clat percentiles (usec): 00:20:10.892 | 1.00th=[ 79], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 93], 00:20:10.892 | 30.00th=[ 104], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:20:10.892 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 120], 95.00th=[ 123], 00:20:10.892 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 143], 99.95th=[ 143], 00:20:10.892 | 99.99th=[ 147] 00:20:10.892 write: IOPS=4344, BW=17.0MiB/s (17.8MB/s)(17.0MiB/1001msec); 0 zone resets 00:20:10.892 slat (nsec): min=10153, max=36204, avg=11553.46, stdev=2641.76 00:20:10.892 clat (usec): min=70, max=153, avg=104.79, stdev=13.54 00:20:10.892 lat (usec): min=82, max=164, avg=116.34, stdev=13.49 00:20:10.892 clat percentiles (usec): 00:20:10.892 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 90], 00:20:10.892 | 30.00th=[ 101], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 112], 00:20:10.892 | 70.00th=[ 114], 80.00th=[ 116], 90.00th=[ 120], 95.00th=[ 123], 00:20:10.892 | 99.00th=[ 130], 99.50th=[ 135], 99.90th=[ 149], 99.95th=[ 151], 00:20:10.892 | 99.99th=[ 153] 00:20:10.892 bw ( KiB/s): min=19184, max=19184, per=27.99%, avg=19184.00, stdev= 0.00, samples=1 00:20:10.892 iops : min= 4796, max= 4796, avg=4796.00, stdev= 0.00, samples=1 00:20:10.892 lat (usec) : 100=27.03%, 250=72.97% 00:20:10.892 cpu : usr=4.90%, sys=12.20%, ctx=8445, majf=0, minf=1 00:20:10.892 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:20:10.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:10.892 issued rwts: total=4096,4349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:10.892 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:10.892 00:20:10.892 Run status group 0 (all jobs): 00:20:10.892 READ: bw=62.8MiB/s (65.8MB/s), 15.3MiB/s-16.1MiB/s (16.1MB/s-16.9MB/s), io=62.9MiB (65.9MB), run=1001-1001msec 00:20:10.892 WRITE: bw=66.9MiB/s (70.2MB/s), 16.0MiB/s-18.0MiB/s (16.8MB/s-18.9MB/s), io=67.0MiB (70.2MB), run=1001-1001msec 00:20:10.892 00:20:10.892 Disk stats (read/write): 00:20:10.892 nvme0n1: ios=3633/3838, merge=0/0, ticks=342/356, in_queue=698, util=84.37% 00:20:10.892 nvme0n2: ios=3103/3584, merge=0/0, ticks=321/370, in_queue=691, util=85.20% 00:20:10.892 nvme0n3: ios=3101/3584, merge=0/0, ticks=336/366, in_queue=702, util=88.45% 00:20:10.892 nvme0n4: ios=3537/3584, merge=0/0, ticks=361/341, in_queue=702, util=89.59% 00:20:10.892 22:06:21 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:10.892 [global] 00:20:10.892 thread=1 00:20:10.892 invalidate=1 00:20:10.892 rw=write 00:20:10.892 time_based=1 00:20:10.892 runtime=1 00:20:10.892 ioengine=libaio 00:20:10.892 direct=1 00:20:10.892 bs=4096 00:20:10.892 iodepth=128 00:20:10.892 norandommap=0 00:20:10.892 numjobs=1 00:20:10.892 00:20:10.892 verify_dump=1 00:20:10.892 verify_backlog=512 00:20:10.892 verify_state_save=0 00:20:10.892 do_verify=1 00:20:10.892 verify=crc32c-intel 00:20:10.892 [job0] 00:20:10.892 filename=/dev/nvme0n1 00:20:10.892 [job1] 00:20:10.892 filename=/dev/nvme0n2 00:20:10.892 [job2] 00:20:10.892 filename=/dev/nvme0n3 00:20:10.892 [job3] 00:20:10.892 filename=/dev/nvme0n4 00:20:10.892 Could not set queue depth (nvme0n1) 00:20:10.892 Could not set queue depth (nvme0n2) 00:20:10.892 Could not set queue depth (nvme0n3) 00:20:10.892 Could not set queue depth (nvme0n4) 00:20:11.149 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:11.149 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:11.149 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:11.149 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:11.149 fio-3.35 00:20:11.149 Starting 4 threads 00:20:12.541 00:20:12.541 job0: (groupid=0, jobs=1): err= 0: pid=2220507: Fri Jul 26 22:06:23 2024 00:20:12.541 read: IOPS=9206, BW=36.0MiB/s (37.7MB/s)(36.0MiB/1001msec) 00:20:12.541 slat (usec): min=2, max=788, avg=54.44, stdev=171.03 00:20:12.541 clat (usec): min=4321, max=11070, avg=7038.74, stdev=2538.33 00:20:12.541 lat (usec): min=4859, max=11073, avg=7093.17, stdev=2554.29 00:20:12.541 clat percentiles (usec): 00:20:12.541 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5080], 20.00th=[ 5211], 00:20:12.541 | 30.00th=[ 5276], 40.00th=[ 5276], 50.00th=[ 5342], 60.00th=[ 5407], 00:20:12.541 | 70.00th=[10159], 80.00th=[10683], 90.00th=[10814], 95.00th=[10814], 00:20:12.541 | 99.00th=[10945], 99.50th=[10945], 99.90th=[11076], 99.95th=[11076], 00:20:12.541 | 99.99th=[11076] 00:20:12.541 write: IOPS=9330, BW=36.4MiB/s (38.2MB/s)(36.5MiB/1001msec); 0 zone resets 00:20:12.541 slat (usec): min=2, max=1236, avg=50.94, stdev=158.96 00:20:12.541 clat 
(usec): min=436, max=10711, avg=6611.67, stdev=2408.65 00:20:12.541 lat (usec): min=920, max=10715, avg=6662.60, stdev=2423.00 00:20:12.541 clat percentiles (usec): 00:20:12.541 | 1.00th=[ 4146], 5.00th=[ 4752], 10.00th=[ 4817], 20.00th=[ 4883], 00:20:12.541 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211], 00:20:12.541 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10290], 00:20:12.541 | 99.00th=[10552], 99.50th=[10552], 99.90th=[10552], 99.95th=[10683], 00:20:12.541 | 99.99th=[10683] 00:20:12.541 bw ( KiB/s): min=24576, max=24576, per=26.12%, avg=24576.00, stdev= 0.00, samples=1 00:20:12.541 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:20:12.541 lat (usec) : 500=0.01%, 1000=0.09% 00:20:12.541 lat (msec) : 2=0.08%, 4=0.25%, 10=74.94%, 20=24.63% 00:20:12.541 cpu : usr=3.00%, sys=5.30%, ctx=1634, majf=0, minf=1 00:20:12.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:12.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:12.541 issued rwts: total=9216,9340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:12.541 job1: (groupid=0, jobs=1): err= 0: pid=2220515: Fri Jul 26 22:06:23 2024 00:20:12.541 read: IOPS=4613, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec) 00:20:12.541 slat (usec): min=2, max=3904, avg=102.82, stdev=432.77 00:20:12.541 clat (usec): min=2155, max=18952, avg=13204.36, stdev=3660.84 00:20:12.541 lat (usec): min=5812, max=18956, avg=13307.19, stdev=3662.09 00:20:12.541 clat percentiles (usec): 00:20:12.541 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10290], 20.00th=[10683], 00:20:12.541 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:20:12.541 | 70.00th=[17957], 80.00th=[18482], 90.00th=[18482], 95.00th=[18744], 00:20:12.541 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:20:12.541 | 99.99th=[19006] 00:20:12.541 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:20:12.541 slat (usec): min=2, max=3871, avg=99.92, stdev=433.22 00:20:12.541 clat (usec): min=6501, max=19442, avg=12852.90, stdev=3601.33 00:20:12.541 lat (usec): min=6504, max=19446, avg=12952.82, stdev=3603.82 00:20:12.541 clat percentiles (usec): 00:20:12.541 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[ 9896], 00:20:12.541 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10683], 00:20:12.541 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17695], 00:20:12.541 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:20:12.541 | 99.99th=[19530] 00:20:12.541 bw ( KiB/s): min=15520, max=24576, per=21.31%, avg=20048.00, stdev=6403.56, samples=2 00:20:12.541 iops : min= 3880, max= 6144, avg=5012.00, stdev=1600.89, samples=2 00:20:12.541 lat (msec) : 4=0.01%, 10=15.80%, 20=84.19% 00:20:12.541 cpu : usr=2.10%, sys=2.99%, ctx=1795, majf=0, minf=1 00:20:12.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:20:12.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:12.541 issued rwts: total=4627,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:12.541 job2: (groupid=0, jobs=1): err= 0: pid=2220544: Fri Jul 26 22:06:23 
2024 00:20:12.541 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:20:12.541 slat (usec): min=2, max=3795, avg=116.63, stdev=444.57 00:20:12.541 clat (usec): min=10241, max=18970, avg=15050.47, stdev=2627.68 00:20:12.541 lat (usec): min=11988, max=18974, avg=15167.10, stdev=2610.85 00:20:12.541 clat percentiles (usec): 00:20:12.541 | 1.00th=[11469], 5.00th=[12387], 10.00th=[12649], 20.00th=[12911], 00:20:12.541 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[15139], 00:20:12.541 | 70.00th=[17957], 80.00th=[18482], 90.00th=[18482], 95.00th=[18744], 00:20:12.541 | 99.00th=[19006], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:20:12.541 | 99.99th=[19006] 00:20:12.541 write: IOPS=4542, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1003msec); 0 zone resets 00:20:12.541 slat (usec): min=2, max=3736, avg=112.14, stdev=438.85 00:20:12.541 clat (usec): min=1754, max=18565, avg=14294.05, stdev=2730.69 00:20:12.541 lat (usec): min=4276, max=18570, avg=14406.18, stdev=2713.18 00:20:12.541 clat percentiles (usec): 00:20:12.541 | 1.00th=[ 9241], 5.00th=[11469], 10.00th=[12125], 20.00th=[12125], 00:20:12.541 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[16581], 00:20:12.541 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17957], 00:20:12.541 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:20:12.541 | 99.99th=[18482] 00:20:12.541 bw ( KiB/s): min=14944, max=20480, per=18.83%, avg=17712.00, stdev=3914.54, samples=2 00:20:12.541 iops : min= 3736, max= 5120, avg=4428.00, stdev=978.64, samples=2 00:20:12.541 lat (msec) : 2=0.01%, 10=0.81%, 20=99.18% 00:20:12.541 cpu : usr=1.40%, sys=3.29%, ctx=1124, majf=0, minf=1 00:20:12.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:12.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:12.541 issued rwts: total=4096,4556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:12.541 job3: (groupid=0, jobs=1): err= 0: pid=2220553: Fri Jul 26 22:06:23 2024 00:20:12.541 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:20:12.541 slat (usec): min=2, max=4053, avg=116.64, stdev=506.58 00:20:12.541 clat (usec): min=10117, max=18932, avg=15041.02, stdev=2634.09 00:20:12.541 lat (usec): min=11998, max=18939, avg=15157.65, stdev=2605.62 00:20:12.541 clat percentiles (usec): 00:20:12.541 | 1.00th=[11338], 5.00th=[12387], 10.00th=[12649], 20.00th=[12911], 00:20:12.541 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13173], 60.00th=[14353], 00:20:12.541 | 70.00th=[18220], 80.00th=[18482], 90.00th=[18482], 95.00th=[18744], 00:20:12.541 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19006], 99.95th=[19006], 00:20:12.541 | 99.99th=[19006] 00:20:12.541 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1003msec); 0 zone resets 00:20:12.541 slat (usec): min=2, max=3867, avg=111.05, stdev=484.70 00:20:12.541 clat (usec): min=320, max=18458, avg=14255.13, stdev=2792.93 00:20:12.541 lat (usec): min=2936, max=18467, avg=14366.18, stdev=2768.92 00:20:12.541 clat percentiles (usec): 00:20:12.541 | 1.00th=[ 7832], 5.00th=[11469], 10.00th=[11994], 20.00th=[12125], 00:20:12.541 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[16450], 00:20:12.541 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17695], 95.00th=[17695], 00:20:12.541 | 99.00th=[18220], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 
00:20:12.541 | 99.99th=[18482] 00:20:12.541 bw ( KiB/s): min=15104, max=20480, per=18.91%, avg=17792.00, stdev=3801.41, samples=2 00:20:12.541 iops : min= 3776, max= 5120, avg=4448.00, stdev=950.35, samples=2 00:20:12.541 lat (usec) : 500=0.01% 00:20:12.541 lat (msec) : 4=0.37%, 10=0.44%, 20=99.18% 00:20:12.541 cpu : usr=2.20%, sys=3.69%, ctx=724, majf=0, minf=1 00:20:12.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:12.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:12.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:12.541 issued rwts: total=4096,4576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:12.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:12.541 00:20:12.541 Run status group 0 (all jobs): 00:20:12.541 READ: bw=85.8MiB/s (90.0MB/s), 16.0MiB/s-36.0MiB/s (16.7MB/s-37.7MB/s), io=86.1MiB (90.3MB), run=1001-1003msec 00:20:12.541 WRITE: bw=91.9MiB/s (96.3MB/s), 17.7MiB/s-36.4MiB/s (18.6MB/s-38.2MB/s), io=92.2MiB (96.6MB), run=1001-1003msec 00:20:12.541 00:20:12.541 Disk stats (read/write): 00:20:12.541 nvme0n1: ios=6705/6992, merge=0/0, ticks=12868/12437, in_queue=25305, util=81.44% 00:20:12.542 nvme0n2: ios=4096/4210, merge=0/0, ticks=12830/12553, in_queue=25383, util=82.72% 00:20:12.542 nvme0n3: ios=3584/3619, merge=0/0, ticks=13047/12287, in_queue=25334, util=87.49% 00:20:12.542 nvme0n4: ios=3584/3615, merge=0/0, ticks=12915/12216, in_queue=25131, util=89.04% 00:20:12.542 22:06:23 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:12.542 [global] 00:20:12.542 thread=1 00:20:12.542 invalidate=1 00:20:12.542 rw=randwrite 00:20:12.542 time_based=1 00:20:12.542 runtime=1 00:20:12.542 ioengine=libaio 00:20:12.542 direct=1 00:20:12.542 bs=4096 00:20:12.542 iodepth=128 00:20:12.542 norandommap=0 00:20:12.542 numjobs=1 00:20:12.542 00:20:12.542 verify_dump=1 00:20:12.542 verify_backlog=512 00:20:12.542 verify_state_save=0 00:20:12.542 do_verify=1 00:20:12.542 verify=crc32c-intel 00:20:12.542 [job0] 00:20:12.542 filename=/dev/nvme0n1 00:20:12.542 [job1] 00:20:12.542 filename=/dev/nvme0n2 00:20:12.542 [job2] 00:20:12.542 filename=/dev/nvme0n3 00:20:12.542 [job3] 00:20:12.542 filename=/dev/nvme0n4 00:20:12.542 Could not set queue depth (nvme0n1) 00:20:12.542 Could not set queue depth (nvme0n2) 00:20:12.542 Could not set queue depth (nvme0n3) 00:20:12.542 Could not set queue depth (nvme0n4) 00:20:12.798 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:12.798 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:12.798 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:12.798 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:12.798 fio-3.35 00:20:12.798 Starting 4 threads 00:20:14.175 00:20:14.175 job0: (groupid=0, jobs=1): err= 0: pid=2220948: Fri Jul 26 22:06:25 2024 00:20:14.175 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:20:14.175 slat (usec): min=2, max=4185, avg=87.72, stdev=393.45 00:20:14.175 clat (usec): min=5752, max=16909, avg=11332.69, stdev=3147.92 00:20:14.175 lat (usec): min=5754, max=18956, avg=11420.40, stdev=3177.81 00:20:14.175 clat percentiles (usec): 00:20:14.175 | 1.00th=[ 5866], 5.00th=[ 6063], 
10.00th=[ 6259], 20.00th=[ 6980], 00:20:14.175 | 30.00th=[10552], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:20:14.175 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14615], 95.00th=[14746], 00:20:14.175 | 99.00th=[15926], 99.50th=[16450], 99.90th=[16712], 99.95th=[16909], 00:20:14.175 | 99.99th=[16909] 00:20:14.175 write: IOPS=5896, BW=23.0MiB/s (24.1MB/s)(23.1MiB/1001msec); 0 zone resets 00:20:14.175 slat (usec): min=2, max=3982, avg=82.59, stdev=371.48 00:20:14.175 clat (usec): min=650, max=16094, avg=10673.89, stdev=3169.16 00:20:14.175 lat (usec): min=1483, max=16104, avg=10756.49, stdev=3198.36 00:20:14.175 clat percentiles (usec): 00:20:14.175 | 1.00th=[ 3752], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6521], 00:20:14.175 | 30.00th=[ 7570], 40.00th=[11600], 50.00th=[11994], 60.00th=[12125], 00:20:14.175 | 70.00th=[12387], 80.00th=[12780], 90.00th=[14484], 95.00th=[14615], 00:20:14.175 | 99.00th=[14746], 99.50th=[15139], 99.90th=[15795], 99.95th=[15926], 00:20:14.175 | 99.99th=[16057] 00:20:14.175 bw ( KiB/s): min=20480, max=20480, per=18.74%, avg=20480.00, stdev= 0.00, samples=1 00:20:14.175 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:14.175 lat (usec) : 750=0.01% 00:20:14.175 lat (msec) : 2=0.28%, 4=0.33%, 10=29.84%, 20=69.54% 00:20:14.175 cpu : usr=1.70%, sys=4.80%, ctx=1278, majf=0, minf=1 00:20:14.175 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:20:14.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.176 issued rwts: total=5632,5902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.176 job1: (groupid=0, jobs=1): err= 0: pid=2220959: Fri Jul 26 22:06:25 2024 00:20:14.176 read: IOPS=7015, BW=27.4MiB/s (28.7MB/s)(27.5MiB/1003msec) 00:20:14.176 slat (usec): min=2, max=4271, avg=71.32, stdev=330.01 00:20:14.176 clat (usec): min=1866, max=17162, avg=9152.56, stdev=3221.45 00:20:14.176 lat (usec): min=2667, max=17173, avg=9223.88, stdev=3253.54 00:20:14.176 clat percentiles (usec): 00:20:14.176 | 1.00th=[ 5604], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6456], 00:20:14.176 | 30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6652], 60.00th=[11863], 00:20:14.176 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:20:14.176 | 99.00th=[15664], 99.50th=[16450], 99.90th=[16712], 99.95th=[16909], 00:20:14.176 | 99.99th=[17171] 00:20:14.176 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:20:14.176 slat (usec): min=2, max=4042, avg=66.53, stdev=301.40 00:20:14.176 clat (usec): min=5260, max=16248, avg=8720.51, stdev=2958.10 00:20:14.176 lat (usec): min=5329, max=16251, avg=8787.04, stdev=2987.42 00:20:14.176 clat percentiles (usec): 00:20:14.176 | 1.00th=[ 5473], 5.00th=[ 6063], 10.00th=[ 6128], 20.00th=[ 6194], 00:20:14.176 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6325], 60.00th=[11338], 00:20:14.176 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[12649], 00:20:14.176 | 99.00th=[14484], 99.50th=[15139], 99.90th=[15795], 99.95th=[16188], 00:20:14.176 | 99.99th=[16188] 00:20:14.176 bw ( KiB/s): min=20480, max=36864, per=26.23%, avg=28672.00, stdev=11585.24, samples=2 00:20:14.176 iops : min= 5120, max= 9216, avg=7168.00, stdev=2896.31, samples=2 00:20:14.176 lat (msec) : 2=0.01%, 4=0.22%, 10=57.65%, 20=42.13% 00:20:14.176 cpu : usr=2.40%, sys=5.09%, ctx=986, majf=0, minf=1 
00:20:14.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:14.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.176 issued rwts: total=7037,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.176 job2: (groupid=0, jobs=1): err= 0: pid=2220979: Fri Jul 26 22:06:25 2024 00:20:14.176 read: IOPS=7152, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec) 00:20:14.176 slat (usec): min=2, max=2515, avg=65.05, stdev=232.35 00:20:14.176 clat (usec): min=1989, max=15912, avg=8540.75, stdev=1951.97 00:20:14.176 lat (usec): min=3044, max=15925, avg=8605.80, stdev=1972.09 00:20:14.176 clat percentiles (usec): 00:20:14.176 | 1.00th=[ 7308], 5.00th=[ 7570], 10.00th=[ 7635], 20.00th=[ 7767], 00:20:14.176 | 30.00th=[ 7832], 40.00th=[ 7898], 50.00th=[ 7963], 60.00th=[ 8029], 00:20:14.176 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[15401], 00:20:14.176 | 99.00th=[15795], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:20:14.176 | 99.99th=[15926] 00:20:14.176 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:20:14.176 slat (usec): min=2, max=2556, avg=65.52, stdev=233.53 00:20:14.176 clat (usec): min=3654, max=16068, avg=8556.96, stdev=2422.51 00:20:14.176 lat (usec): min=3657, max=17225, avg=8622.49, stdev=2442.31 00:20:14.176 clat percentiles (usec): 00:20:14.176 | 1.00th=[ 7177], 5.00th=[ 7308], 10.00th=[ 7373], 20.00th=[ 7439], 00:20:14.176 | 30.00th=[ 7504], 40.00th=[ 7570], 50.00th=[ 7635], 60.00th=[ 7701], 00:20:14.176 | 70.00th=[ 7832], 80.00th=[ 8291], 90.00th=[14746], 95.00th=[15139], 00:20:14.176 | 99.00th=[15401], 99.50th=[15401], 99.90th=[16057], 99.95th=[16057], 00:20:14.176 | 99.99th=[16057] 00:20:14.176 bw ( KiB/s): min=27704, max=32768, per=27.66%, avg=30236.00, stdev=3580.79, samples=2 00:20:14.176 iops : min= 6926, max= 8192, avg=7559.00, stdev=895.20, samples=2 00:20:14.176 lat (msec) : 2=0.01%, 4=0.11%, 10=89.75%, 20=10.13% 00:20:14.176 cpu : usr=3.19%, sys=6.69%, ctx=1089, majf=0, minf=1 00:20:14.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:14.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.176 issued rwts: total=7174,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.176 job3: (groupid=0, jobs=1): err= 0: pid=2220985: Fri Jul 26 22:06:25 2024 00:20:14.176 read: IOPS=6361, BW=24.9MiB/s (26.1MB/s)(24.9MiB/1003msec) 00:20:14.176 slat (usec): min=2, max=2469, avg=77.48, stdev=290.96 00:20:14.176 clat (usec): min=2032, max=17265, avg=9880.91, stdev=3023.64 00:20:14.176 lat (usec): min=2991, max=17268, avg=9958.39, stdev=3038.20 00:20:14.176 clat percentiles (usec): 00:20:14.176 | 1.00th=[ 7046], 5.00th=[ 7570], 10.00th=[ 7767], 20.00th=[ 7832], 00:20:14.176 | 30.00th=[ 7963], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8586], 00:20:14.176 | 70.00th=[ 8979], 80.00th=[14484], 90.00th=[15008], 95.00th=[15401], 00:20:14.176 | 99.00th=[15926], 99.50th=[15926], 99.90th=[16057], 99.95th=[16057], 00:20:14.176 | 99.99th=[17171] 00:20:14.176 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:20:14.176 slat (usec): min=2, max=2439, avg=73.05, stdev=274.84 00:20:14.176 clat (usec): 
min=6257, max=15634, avg=9606.16, stdev=3034.29 00:20:14.176 lat (usec): min=6402, max=15637, avg=9679.21, stdev=3051.64 00:20:14.176 clat percentiles (usec): 00:20:14.176 | 1.00th=[ 6849], 5.00th=[ 7242], 10.00th=[ 7373], 20.00th=[ 7504], 00:20:14.176 | 30.00th=[ 7570], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8225], 00:20:14.176 | 70.00th=[ 8717], 80.00th=[14091], 90.00th=[14746], 95.00th=[15008], 00:20:14.176 | 99.00th=[15533], 99.50th=[15533], 99.90th=[15664], 99.95th=[15664], 00:20:14.176 | 99.99th=[15664] 00:20:14.176 bw ( KiB/s): min=20480, max=32768, per=24.36%, avg=26624.00, stdev=8688.93, samples=2 00:20:14.176 iops : min= 5120, max= 8192, avg=6656.00, stdev=2172.23, samples=2 00:20:14.176 lat (msec) : 4=0.12%, 10=72.28%, 20=27.60% 00:20:14.176 cpu : usr=2.79%, sys=4.39%, ctx=1272, majf=0, minf=1 00:20:14.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:20:14.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:14.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:14.176 issued rwts: total=6381,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:14.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:14.176 00:20:14.176 Run status group 0 (all jobs): 00:20:14.176 READ: bw=102MiB/s (107MB/s), 22.0MiB/s-27.9MiB/s (23.0MB/s-29.3MB/s), io=102MiB (107MB), run=1001-1003msec 00:20:14.176 WRITE: bw=107MiB/s (112MB/s), 23.0MiB/s-29.9MiB/s (24.1MB/s-31.4MB/s), io=107MiB (112MB), run=1001-1003msec 00:20:14.176 00:20:14.176 Disk stats (read/write): 00:20:14.176 nvme0n1: ios=4145/4139, merge=0/0, ticks=22461/21702, in_queue=44163, util=84.47% 00:20:14.176 nvme0n2: ios=5181/5632, merge=0/0, ticks=25739/25844, in_queue=51583, util=85.19% 00:20:14.176 nvme0n3: ios=6656/6757, merge=0/0, ticks=13415/12642, in_queue=26057, util=88.44% 00:20:14.176 nvme0n4: ios=5632/5964, merge=0/0, ticks=12994/13186, in_queue=26180, util=89.48% 00:20:14.176 22:06:25 -- target/fio.sh@55 -- # sync 00:20:14.176 22:06:25 -- target/fio.sh@59 -- # fio_pid=2221196 00:20:14.176 22:06:25 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:14.176 22:06:25 -- target/fio.sh@61 -- # sleep 3 00:20:14.176 [global] 00:20:14.176 thread=1 00:20:14.176 invalidate=1 00:20:14.176 rw=read 00:20:14.176 time_based=1 00:20:14.176 runtime=10 00:20:14.176 ioengine=libaio 00:20:14.176 direct=1 00:20:14.176 bs=4096 00:20:14.176 iodepth=1 00:20:14.176 norandommap=1 00:20:14.176 numjobs=1 00:20:14.176 00:20:14.176 [job0] 00:20:14.176 filename=/dev/nvme0n1 00:20:14.176 [job1] 00:20:14.176 filename=/dev/nvme0n2 00:20:14.176 [job2] 00:20:14.176 filename=/dev/nvme0n3 00:20:14.176 [job3] 00:20:14.176 filename=/dev/nvme0n4 00:20:14.176 Could not set queue depth (nvme0n1) 00:20:14.176 Could not set queue depth (nvme0n2) 00:20:14.176 Could not set queue depth (nvme0n3) 00:20:14.176 Could not set queue depth (nvme0n4) 00:20:14.433 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:14.433 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:14.433 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:14.433 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:14.433 fio-3.35 00:20:14.433 Starting 4 threads 00:20:16.950 22:06:28 -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:17.206 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=96018432, buflen=4096 00:20:17.207 fio: pid=2221422, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:17.207 22:06:28 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:17.207 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=77357056, buflen=4096 00:20:17.207 fio: pid=2221413, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:17.207 22:06:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:17.207 22:06:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:17.463 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22659072, buflen=4096 00:20:17.463 fio: pid=2221370, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:17.463 22:06:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:17.463 22:06:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:17.720 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=26935296, buflen=4096 00:20:17.720 fio: pid=2221385, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:17.720 22:06:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:17.720 22:06:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:17.720 00:20:17.720 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2221370: Fri Jul 26 22:06:28 2024 00:20:17.720 read: IOPS=7426, BW=29.0MiB/s (30.4MB/s)(85.6MiB/2951msec) 00:20:17.720 slat (usec): min=6, max=16505, avg=11.83, stdev=205.24 00:20:17.720 clat (usec): min=51, max=341, avg=120.48, stdev=29.59 00:20:17.720 lat (usec): min=60, max=16619, avg=132.32, stdev=207.23 00:20:17.720 clat percentiles (usec): 00:20:17.720 | 1.00th=[ 61], 5.00th=[ 73], 10.00th=[ 76], 20.00th=[ 82], 00:20:17.720 | 30.00th=[ 112], 40.00th=[ 120], 50.00th=[ 127], 60.00th=[ 137], 00:20:17.720 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:20:17.720 | 99.00th=[ 176], 99.50th=[ 184], 99.90th=[ 202], 99.95th=[ 210], 00:20:17.720 | 99.99th=[ 262] 00:20:17.720 bw ( KiB/s): min=26000, max=38248, per=26.31%, avg=29281.60, stdev=5252.45, samples=5 00:20:17.720 iops : min= 6500, max= 9562, avg=7320.40, stdev=1313.11, samples=5 00:20:17.720 lat (usec) : 100=26.49%, 250=73.50%, 500=0.01% 00:20:17.720 cpu : usr=2.85%, sys=8.20%, ctx=21923, majf=0, minf=1 00:20:17.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.720 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.720 issued rwts: total=21917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:17.720 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2221385: Fri Jul 26 22:06:28 2024 00:20:17.720 read: IOPS=7326, BW=28.6MiB/s (30.0MB/s)(89.7MiB/3134msec) 00:20:17.720 slat (usec): 
min=7, max=20922, avg=13.95, stdev=264.60 00:20:17.720 clat (usec): min=43, max=8815, avg=120.78, stdev=66.46 00:20:17.720 lat (usec): min=57, max=21008, avg=134.73, stdev=272.38 00:20:17.720 clat percentiles (usec): 00:20:17.720 | 1.00th=[ 53], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 79], 00:20:17.720 | 30.00th=[ 115], 40.00th=[ 121], 50.00th=[ 130], 60.00th=[ 139], 00:20:17.720 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 161], 00:20:17.720 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 206], 99.95th=[ 208], 00:20:17.720 | 99.99th=[ 219] 00:20:17.720 bw ( KiB/s): min=25632, max=36696, per=25.74%, avg=28645.33, stdev=4264.21, samples=6 00:20:17.720 iops : min= 6408, max= 9174, avg=7161.33, stdev=1066.05, samples=6 00:20:17.720 lat (usec) : 50=0.02%, 100=24.08%, 250=75.89% 00:20:17.720 lat (msec) : 10=0.01% 00:20:17.720 cpu : usr=2.81%, sys=11.52%, ctx=22969, majf=0, minf=1 00:20:17.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.720 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.721 issued rwts: total=22961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:17.721 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2221413: Fri Jul 26 22:06:28 2024 00:20:17.721 read: IOPS=6808, BW=26.6MiB/s (27.9MB/s)(73.8MiB/2774msec) 00:20:17.721 slat (usec): min=8, max=15785, avg=11.12, stdev=139.44 00:20:17.721 clat (usec): min=58, max=226, avg=133.26, stdev=20.24 00:20:17.721 lat (usec): min=67, max=15898, avg=144.38, stdev=140.70 00:20:17.721 clat percentiles (usec): 00:20:17.721 | 1.00th=[ 79], 5.00th=[ 92], 10.00th=[ 111], 20.00th=[ 118], 00:20:17.721 | 30.00th=[ 123], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:20:17.721 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:20:17.721 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 198], 99.95th=[ 204], 00:20:17.721 | 99.99th=[ 223] 00:20:17.721 bw ( KiB/s): min=25976, max=29624, per=24.41%, avg=27169.60, stdev=1597.93, samples=5 00:20:17.721 iops : min= 6494, max= 7406, avg=6792.40, stdev=399.48, samples=5 00:20:17.721 lat (usec) : 100=6.35%, 250=93.64% 00:20:17.721 cpu : usr=3.32%, sys=9.84%, ctx=18891, majf=0, minf=1 00:20:17.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.721 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.721 issued rwts: total=18887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:17.721 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2221422: Fri Jul 26 22:06:28 2024 00:20:17.721 read: IOPS=9082, BW=35.5MiB/s (37.2MB/s)(91.6MiB/2581msec) 00:20:17.721 slat (nsec): min=8179, max=43330, avg=9011.37, stdev=1005.06 00:20:17.721 clat (usec): min=66, max=213, avg=99.64, stdev=23.50 00:20:17.721 lat (usec): min=79, max=222, avg=108.65, stdev=23.61 00:20:17.721 clat percentiles (usec): 00:20:17.721 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:20:17.721 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 91], 00:20:17.721 | 70.00th=[ 116], 80.00th=[ 122], 90.00th=[ 139], 95.00th=[ 147], 00:20:17.721 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 186], 
99.95th=[ 196], 00:20:17.721 | 99.99th=[ 208] 00:20:17.721 bw ( KiB/s): min=28000, max=42664, per=32.65%, avg=36344.00, stdev=6084.31, samples=5 00:20:17.721 iops : min= 7000, max=10666, avg=9086.00, stdev=1521.08, samples=5 00:20:17.721 lat (usec) : 100=65.23%, 250=34.77% 00:20:17.721 cpu : usr=3.18%, sys=13.80%, ctx=23445, majf=0, minf=2 00:20:17.721 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:17.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.721 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:17.721 issued rwts: total=23443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:17.721 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:17.721 00:20:17.721 Run status group 0 (all jobs): 00:20:17.721 READ: bw=109MiB/s (114MB/s), 26.6MiB/s-35.5MiB/s (27.9MB/s-37.2MB/s), io=341MiB (357MB), run=2581-3134msec 00:20:17.721 00:20:17.721 Disk stats (read/write): 00:20:17.721 nvme0n1: ios=20476/0, merge=0/0, ticks=2435/0, in_queue=2435, util=91.82% 00:20:17.721 nvme0n2: ios=21952/0, merge=0/0, ticks=2514/0, in_queue=2514, util=91.83% 00:20:17.721 nvme0n3: ios=17321/0, merge=0/0, ticks=2181/0, in_queue=2181, util=95.80% 00:20:17.721 nvme0n4: ios=23274/0, merge=0/0, ticks=2156/0, in_queue=2156, util=96.42% 00:20:17.977 22:06:28 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:17.977 22:06:28 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:17.977 22:06:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:17.977 22:06:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:18.234 22:06:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:18.234 22:06:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:18.490 22:06:29 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:18.490 22:06:29 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:18.747 22:06:29 -- target/fio.sh@69 -- # fio_status=0 00:20:18.747 22:06:29 -- target/fio.sh@70 -- # wait 2221196 00:20:18.747 22:06:29 -- target/fio.sh@70 -- # fio_status=4 00:20:18.747 22:06:29 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:19.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:19.708 22:06:30 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:19.708 22:06:30 -- common/autotest_common.sh@1198 -- # local i=0 00:20:19.708 22:06:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:19.708 22:06:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:19.708 22:06:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:19.708 22:06:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:19.708 22:06:30 -- common/autotest_common.sh@1210 -- # return 0 00:20:19.708 22:06:30 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:19.708 22:06:30 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:19.708 nvmf hotplug test: fio failed as expected 00:20:19.708 22:06:30 -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.708 22:06:30 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:19.708 22:06:30 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:19.708 22:06:30 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:19.708 22:06:30 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:19.708 22:06:30 -- target/fio.sh@91 -- # nvmftestfini 00:20:19.708 22:06:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:19.708 22:06:30 -- nvmf/common.sh@116 -- # sync 00:20:19.708 22:06:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:19.708 22:06:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:19.708 22:06:30 -- nvmf/common.sh@119 -- # set +e 00:20:19.708 22:06:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:19.708 22:06:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:19.708 rmmod nvme_rdma 00:20:19.965 rmmod nvme_fabrics 00:20:19.965 22:06:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:19.965 22:06:30 -- nvmf/common.sh@123 -- # set -e 00:20:19.965 22:06:30 -- nvmf/common.sh@124 -- # return 0 00:20:19.965 22:06:30 -- nvmf/common.sh@477 -- # '[' -n 2218323 ']' 00:20:19.965 22:06:30 -- nvmf/common.sh@478 -- # killprocess 2218323 00:20:19.965 22:06:30 -- common/autotest_common.sh@926 -- # '[' -z 2218323 ']' 00:20:19.965 22:06:30 -- common/autotest_common.sh@930 -- # kill -0 2218323 00:20:19.965 22:06:30 -- common/autotest_common.sh@931 -- # uname 00:20:19.965 22:06:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:19.965 22:06:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2218323 00:20:19.965 22:06:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:19.965 22:06:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:19.965 22:06:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2218323' 00:20:19.965 killing process with pid 2218323 00:20:19.965 22:06:31 -- common/autotest_common.sh@945 -- # kill 2218323 00:20:19.965 22:06:31 -- common/autotest_common.sh@950 -- # wait 2218323 00:20:20.222 22:06:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:20.222 22:06:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:20.222 00:20:20.222 real 0m27.835s 00:20:20.222 user 2m8.054s 00:20:20.222 sys 0m11.334s 00:20:20.222 22:06:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:20.222 22:06:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.222 ************************************ 00:20:20.222 END TEST nvmf_fio_target 00:20:20.222 ************************************ 00:20:20.222 22:06:31 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:20.222 22:06:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:20.222 22:06:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:20.222 22:06:31 -- common/autotest_common.sh@10 -- # set +x 00:20:20.222 ************************************ 00:20:20.222 START TEST nvmf_bdevio 00:20:20.222 ************************************ 00:20:20.222 22:06:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:20.222 * Looking for test storage... 
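Recap of the hotplug sequence traced above: fio is started read-only against the exported namespaces, and while it is still running the backing Malloc/RAID bdevs are deleted over RPC, so every job is expected to end with err=121 (Remote I/O error) and the harness then reports "fio failed as expected". A minimal sketch of the same pattern, assuming the rpc.py path and the Malloc0 bdev name shown in this log:

# Sketch only - reproduces the hotplug-failure pattern exercised above.
# Assumes an NVMe-oF target is already serving /dev/nvme0n1 backed by Malloc0.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 \
    --iodepth=1 --ioengine=libaio --direct=1 --time_based --runtime=10 &
fio_pid=$!

sleep 3                              # let reads start flowing
$RPC bdev_malloc_delete Malloc0      # remove the backing bdev mid-I/O
wait $fio_pid || echo "nvmf hotplug test: fio failed as expected"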
00:20:20.222 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:20.222 22:06:31 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:20.222 22:06:31 -- nvmf/common.sh@7 -- # uname -s 00:20:20.222 22:06:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.222 22:06:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.222 22:06:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.222 22:06:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.222 22:06:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.222 22:06:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.222 22:06:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.222 22:06:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.222 22:06:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.222 22:06:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.222 22:06:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:20.222 22:06:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:20.222 22:06:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.222 22:06:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.222 22:06:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:20.222 22:06:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:20.480 22:06:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.480 22:06:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.480 22:06:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.480 22:06:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.480 22:06:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.480 22:06:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.480 22:06:31 -- paths/export.sh@5 -- # export PATH 00:20:20.480 22:06:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.480 22:06:31 -- nvmf/common.sh@46 -- # : 0 00:20:20.480 22:06:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:20.480 22:06:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:20.480 22:06:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:20.480 22:06:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.480 22:06:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.480 22:06:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:20.480 22:06:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:20.480 22:06:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:20.480 22:06:31 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:20.480 22:06:31 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:20.480 22:06:31 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:20.480 22:06:31 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:20.480 22:06:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:20.480 22:06:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:20.480 22:06:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:20.480 22:06:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:20.480 22:06:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.480 22:06:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.480 22:06:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.480 22:06:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:20.480 22:06:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:20.480 22:06:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:20.480 22:06:31 -- common/autotest_common.sh@10 -- # set +x 00:20:28.583 22:06:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:28.583 22:06:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:28.583 22:06:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:28.583 22:06:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:28.583 22:06:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:28.583 22:06:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:28.583 22:06:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:28.583 22:06:39 -- nvmf/common.sh@294 -- # net_devs=() 00:20:28.583 22:06:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:28.583 22:06:39 -- nvmf/common.sh@295 
-- # e810=() 00:20:28.583 22:06:39 -- nvmf/common.sh@295 -- # local -ga e810 00:20:28.583 22:06:39 -- nvmf/common.sh@296 -- # x722=() 00:20:28.583 22:06:39 -- nvmf/common.sh@296 -- # local -ga x722 00:20:28.583 22:06:39 -- nvmf/common.sh@297 -- # mlx=() 00:20:28.583 22:06:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:28.583 22:06:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.583 22:06:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:28.583 22:06:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:28.583 22:06:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:28.583 22:06:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:28.583 22:06:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:28.583 22:06:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:28.583 22:06:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:28.583 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:28.583 22:06:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:28.583 22:06:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:28.583 22:06:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:28.583 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:28.583 22:06:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:28.583 22:06:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:28.583 22:06:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:28.583 22:06:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.583 22:06:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:20:28.583 22:06:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.583 22:06:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:28.583 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:28.583 22:06:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.583 22:06:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:28.583 22:06:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.583 22:06:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:28.583 22:06:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.583 22:06:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:28.583 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:28.583 22:06:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.583 22:06:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:28.583 22:06:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:28.583 22:06:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:28.583 22:06:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:28.583 22:06:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:28.583 22:06:39 -- nvmf/common.sh@57 -- # uname 00:20:28.583 22:06:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:28.583 22:06:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:28.583 22:06:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:28.583 22:06:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:28.583 22:06:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:28.583 22:06:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:28.583 22:06:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:28.583 22:06:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:28.583 22:06:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:28.584 22:06:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:28.584 22:06:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:28.584 22:06:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:28.584 22:06:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:28.584 22:06:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:28.584 22:06:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:28.584 22:06:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:28.584 22:06:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:28.584 22:06:39 -- nvmf/common.sh@104 -- # continue 2 00:20:28.584 22:06:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:28.584 22:06:39 -- nvmf/common.sh@104 -- # continue 2 00:20:28.584 22:06:39 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:28.584 22:06:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:28.584 22:06:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:28.584 22:06:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:28.584 22:06:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:28.584 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:28.584 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:28.584 altname enp217s0f0np0 00:20:28.584 altname ens818f0np0 00:20:28.584 inet 192.168.100.8/24 scope global mlx_0_0 00:20:28.584 valid_lft forever preferred_lft forever 00:20:28.584 22:06:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:28.584 22:06:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:28.584 22:06:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:28.584 22:06:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:28.584 22:06:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:28.584 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:28.584 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:28.584 altname enp217s0f1np1 00:20:28.584 altname ens818f1np1 00:20:28.584 inet 192.168.100.9/24 scope global mlx_0_1 00:20:28.584 valid_lft forever preferred_lft forever 00:20:28.584 22:06:39 -- nvmf/common.sh@410 -- # return 0 00:20:28.584 22:06:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:28.584 22:06:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:28.584 22:06:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:28.584 22:06:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:28.584 22:06:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:28.584 22:06:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:28.584 22:06:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:28.584 22:06:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:28.584 22:06:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:28.584 22:06:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:28.584 22:06:39 -- nvmf/common.sh@104 -- # continue 2 00:20:28.584 22:06:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:28.584 22:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:28.584 22:06:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:28.584 22:06:39 -- 
nvmf/common.sh@104 -- # continue 2 00:20:28.584 22:06:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:28.584 22:06:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:28.584 22:06:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:28.584 22:06:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:28.584 22:06:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:28.584 22:06:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:28.584 22:06:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:28.584 22:06:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:28.584 192.168.100.9' 00:20:28.584 22:06:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:28.584 192.168.100.9' 00:20:28.584 22:06:39 -- nvmf/common.sh@445 -- # head -n 1 00:20:28.584 22:06:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:28.584 22:06:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:28.584 192.168.100.9' 00:20:28.584 22:06:39 -- nvmf/common.sh@446 -- # head -n 1 00:20:28.584 22:06:39 -- nvmf/common.sh@446 -- # tail -n +2 00:20:28.584 22:06:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:28.584 22:06:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:28.584 22:06:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:28.584 22:06:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:28.584 22:06:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:28.584 22:06:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:28.584 22:06:39 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:28.584 22:06:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:28.584 22:06:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:28.584 22:06:39 -- common/autotest_common.sh@10 -- # set +x 00:20:28.584 22:06:39 -- nvmf/common.sh@469 -- # nvmfpid=2226383 00:20:28.584 22:06:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:28.584 22:06:39 -- nvmf/common.sh@470 -- # waitforlisten 2226383 00:20:28.584 22:06:39 -- common/autotest_common.sh@819 -- # '[' -z 2226383 ']' 00:20:28.584 22:06:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.584 22:06:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:28.584 22:06:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.584 22:06:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:28.584 22:06:39 -- common/autotest_common.sh@10 -- # set +x 00:20:28.584 [2024-07-26 22:06:39.749869] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:28.584 [2024-07-26 22:06:39.749925] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.584 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.842 [2024-07-26 22:06:39.836722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.842 [2024-07-26 22:06:39.874703] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:28.842 [2024-07-26 22:06:39.874812] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.842 [2024-07-26 22:06:39.874822] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.842 [2024-07-26 22:06:39.874831] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.842 [2024-07-26 22:06:39.874955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:28.842 [2024-07-26 22:06:39.875065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:28.842 [2024-07-26 22:06:39.875152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:28.842 [2024-07-26 22:06:39.875151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.406 22:06:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:29.406 22:06:40 -- common/autotest_common.sh@852 -- # return 0 00:20:29.406 22:06:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:29.406 22:06:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:29.406 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:20:29.406 22:06:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.406 22:06:40 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:29.406 22:06:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.406 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:20:29.406 [2024-07-26 22:06:40.628931] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x183dd90/0x1842280) succeed. 00:20:29.662 [2024-07-26 22:06:40.639179] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x183f380/0x1883910) succeed. 
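At this point the bdevio target has created its RDMA transport and both mlx5 IB devices have been registered. A rough equivalent of that step, plus a query to confirm the transport is in place, assuming the default rpc.py socket at /var/tmp/spdk.sock:

# Sketch only - mirrors the nvmf_create_transport call traced above.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC nvmf_get_transports        # should list the rdma transport just created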
00:20:29.662 22:06:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.662 22:06:40 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:29.662 22:06:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.662 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:20:29.662 Malloc0 00:20:29.662 22:06:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.662 22:06:40 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.662 22:06:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.662 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:20:29.662 22:06:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.662 22:06:40 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.662 22:06:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.662 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:20:29.662 22:06:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.662 22:06:40 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:29.662 22:06:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:29.662 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:20:29.662 [2024-07-26 22:06:40.804081] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:29.662 22:06:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:29.662 22:06:40 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:29.662 22:06:40 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:29.662 22:06:40 -- nvmf/common.sh@520 -- # config=() 00:20:29.662 22:06:40 -- nvmf/common.sh@520 -- # local subsystem config 00:20:29.662 22:06:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:29.662 22:06:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:29.662 { 00:20:29.662 "params": { 00:20:29.662 "name": "Nvme$subsystem", 00:20:29.662 "trtype": "$TEST_TRANSPORT", 00:20:29.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.662 "adrfam": "ipv4", 00:20:29.662 "trsvcid": "$NVMF_PORT", 00:20:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.662 "hdgst": ${hdgst:-false}, 00:20:29.662 "ddgst": ${ddgst:-false} 00:20:29.662 }, 00:20:29.662 "method": "bdev_nvme_attach_controller" 00:20:29.662 } 00:20:29.662 EOF 00:20:29.662 )") 00:20:29.662 22:06:40 -- nvmf/common.sh@542 -- # cat 00:20:29.662 22:06:40 -- nvmf/common.sh@544 -- # jq . 00:20:29.662 22:06:40 -- nvmf/common.sh@545 -- # IFS=, 00:20:29.662 22:06:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:29.662 "params": { 00:20:29.662 "name": "Nvme1", 00:20:29.662 "trtype": "rdma", 00:20:29.662 "traddr": "192.168.100.8", 00:20:29.662 "adrfam": "ipv4", 00:20:29.662 "trsvcid": "4420", 00:20:29.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.662 "hdgst": false, 00:20:29.662 "ddgst": false 00:20:29.662 }, 00:20:29.662 "method": "bdev_nvme_attach_controller" 00:20:29.662 }' 00:20:29.662 [2024-07-26 22:06:40.854781] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
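The RPC sequence traced above builds the whole target that bdevio then exercises: a 64 MiB malloc bdev, a subsystem, a namespace, and an RDMA listener on 192.168.100.8:4420. Collected in one place as a sketch of the same calls (no additional configuration is implied):

# Sketch only - the target bring-up performed by bdevio.sh above.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420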
00:20:29.662 [2024-07-26 22:06:40.854836] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226673 ] 00:20:29.918 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.918 [2024-07-26 22:06:40.939814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:29.918 [2024-07-26 22:06:40.978250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.918 [2024-07-26 22:06:40.978346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.918 [2024-07-26 22:06:40.978349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.175 [2024-07-26 22:06:41.149551] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:30.175 [2024-07-26 22:06:41.149582] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:30.175 I/O targets: 00:20:30.175 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:30.175 00:20:30.175 00:20:30.175 CUnit - A unit testing framework for C - Version 2.1-3 00:20:30.175 http://cunit.sourceforge.net/ 00:20:30.175 00:20:30.175 00:20:30.175 Suite: bdevio tests on: Nvme1n1 00:20:30.175 Test: blockdev write read block ...passed 00:20:30.175 Test: blockdev write zeroes read block ...passed 00:20:30.175 Test: blockdev write zeroes read no split ...passed 00:20:30.175 Test: blockdev write zeroes read split ...passed 00:20:30.175 Test: blockdev write zeroes read split partial ...passed 00:20:30.175 Test: blockdev reset ...[2024-07-26 22:06:41.179462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:30.175 [2024-07-26 22:06:41.202276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:30.175 [2024-07-26 22:06:41.228854] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:30.175 passed 00:20:30.175 Test: blockdev write read 8 blocks ...passed 00:20:30.175 Test: blockdev write read size > 128k ...passed 00:20:30.175 Test: blockdev write read invalid size ...passed 00:20:30.175 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:30.175 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:30.175 Test: blockdev write read max offset ...passed 00:20:30.175 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:30.175 Test: blockdev writev readv 8 blocks ...passed 00:20:30.175 Test: blockdev writev readv 30 x 1block ...passed 00:20:30.175 Test: blockdev writev readv block ...passed 00:20:30.175 Test: blockdev writev readv size > 128k ...passed 00:20:30.175 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:30.175 Test: blockdev comparev and writev ...[2024-07-26 22:06:41.231714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.175 [2024-07-26 22:06:41.231741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.231757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.175 [2024-07-26 22:06:41.231767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.231935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.175 [2024-07-26 22:06:41.231946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.231957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.175 [2024-07-26 22:06:41.231966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.232157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.175 [2024-07-26 22:06:41.232168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.232178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.175 [2024-07-26 22:06:41.232188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.232341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.175 [2024-07-26 22:06:41.232352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.232362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:30.175 [2024-07-26 22:06:41.232372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:30.175 passed 00:20:30.175 Test: blockdev nvme passthru rw ...passed 00:20:30.175 Test: blockdev nvme passthru vendor specific ...[2024-07-26 22:06:41.232635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:30.175 [2024-07-26 22:06:41.232647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.232690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:30.175 [2024-07-26 22:06:41.232700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.232748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:30.175 [2024-07-26 22:06:41.232757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:30.175 [2024-07-26 22:06:41.232798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:30.175 [2024-07-26 22:06:41.232808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:30.175 passed 00:20:30.175 Test: blockdev nvme admin passthru ...passed 00:20:30.175 Test: blockdev copy ...passed 00:20:30.175 00:20:30.175 Run Summary: Type Total Ran Passed Failed Inactive 00:20:30.175 suites 1 1 n/a 0 0 00:20:30.175 tests 23 23 23 0 0 00:20:30.175 asserts 152 152 152 0 n/a 00:20:30.175 00:20:30.175 Elapsed time = 0.171 seconds 00:20:30.432 22:06:41 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.432 22:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:30.432 22:06:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.432 22:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:30.432 22:06:41 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:30.432 22:06:41 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:30.432 22:06:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:30.432 22:06:41 -- nvmf/common.sh@116 -- # sync 00:20:30.432 22:06:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:30.432 22:06:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:30.432 22:06:41 -- nvmf/common.sh@119 -- # set +e 00:20:30.432 22:06:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:30.432 22:06:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:30.432 rmmod nvme_rdma 00:20:30.432 rmmod nvme_fabrics 00:20:30.432 22:06:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:30.432 22:06:41 -- nvmf/common.sh@123 -- # set -e 00:20:30.432 22:06:41 -- nvmf/common.sh@124 -- # return 0 00:20:30.432 22:06:41 -- nvmf/common.sh@477 -- # '[' -n 2226383 ']' 00:20:30.432 22:06:41 -- nvmf/common.sh@478 -- # killprocess 2226383 00:20:30.432 22:06:41 -- common/autotest_common.sh@926 -- # '[' -z 2226383 ']' 00:20:30.432 22:06:41 -- common/autotest_common.sh@930 -- # kill -0 2226383 00:20:30.432 22:06:41 -- common/autotest_common.sh@931 -- # uname 00:20:30.432 22:06:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:30.432 22:06:41 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2226383 00:20:30.432 22:06:41 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:20:30.432 22:06:41 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:20:30.432 22:06:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2226383' 00:20:30.432 killing process with pid 2226383 00:20:30.432 22:06:41 -- common/autotest_common.sh@945 -- # kill 2226383 00:20:30.432 22:06:41 -- common/autotest_common.sh@950 -- # wait 2226383 00:20:30.690 22:06:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:30.690 22:06:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:30.690 00:20:30.690 real 0m10.463s 00:20:30.690 user 0m10.868s 00:20:30.690 sys 0m6.949s 00:20:30.690 22:06:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.690 22:06:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.690 ************************************ 00:20:30.690 END TEST nvmf_bdevio 00:20:30.690 ************************************ 00:20:30.690 22:06:41 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:30.690 22:06:41 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:30.690 22:06:41 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:30.690 22:06:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:30.690 22:06:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:30.690 22:06:41 -- common/autotest_common.sh@10 -- # set +x 00:20:30.690 ************************************ 00:20:30.690 START TEST nvmf_fuzz 00:20:30.690 ************************************ 00:20:30.690 22:06:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:30.947 * Looking for test storage... 
00:20:30.947 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:30.947 22:06:41 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.947 22:06:41 -- nvmf/common.sh@7 -- # uname -s 00:20:30.947 22:06:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.947 22:06:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.947 22:06:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.947 22:06:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.947 22:06:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.947 22:06:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.947 22:06:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.947 22:06:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.947 22:06:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.947 22:06:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.947 22:06:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:30.947 22:06:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:30.947 22:06:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.947 22:06:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.947 22:06:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.947 22:06:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:30.947 22:06:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.947 22:06:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.947 22:06:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.947 22:06:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.947 22:06:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.947 22:06:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.947 22:06:41 -- paths/export.sh@5 -- # export PATH 00:20:30.947 22:06:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.947 22:06:41 -- nvmf/common.sh@46 -- # : 0 00:20:30.947 22:06:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:30.947 22:06:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:30.947 22:06:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:30.947 22:06:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.947 22:06:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.947 22:06:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:30.947 22:06:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:30.947 22:06:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:30.947 22:06:41 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:30.947 22:06:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:30.947 22:06:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.947 22:06:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:30.947 22:06:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:30.947 22:06:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:30.947 22:06:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.947 22:06:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.947 22:06:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.947 22:06:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:30.947 22:06:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:30.947 22:06:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:30.947 22:06:41 -- common/autotest_common.sh@10 -- # set +x 00:20:39.044 22:06:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:39.044 22:06:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:39.044 22:06:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:39.044 22:06:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:39.044 22:06:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:39.044 22:06:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:39.044 22:06:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:39.044 22:06:49 -- nvmf/common.sh@294 -- # net_devs=() 00:20:39.044 22:06:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:39.044 22:06:49 -- nvmf/common.sh@295 -- # e810=() 00:20:39.044 22:06:49 -- nvmf/common.sh@295 -- # local -ga e810 00:20:39.044 22:06:49 -- nvmf/common.sh@296 -- # x722=() 
00:20:39.044 22:06:49 -- nvmf/common.sh@296 -- # local -ga x722 00:20:39.044 22:06:49 -- nvmf/common.sh@297 -- # mlx=() 00:20:39.044 22:06:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:39.044 22:06:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.044 22:06:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:39.044 22:06:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:39.044 22:06:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:39.044 22:06:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:39.044 22:06:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:39.044 22:06:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:39.044 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:39.044 22:06:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:39.044 22:06:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:39.044 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:39.044 22:06:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:39.044 22:06:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:39.044 22:06:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.044 22:06:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:39.044 22:06:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.044 22:06:49 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:39.044 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.044 22:06:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.044 22:06:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:39.044 22:06:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.044 22:06:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:39.044 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.044 22:06:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:39.044 22:06:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:39.044 22:06:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:39.044 22:06:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:39.044 22:06:49 -- nvmf/common.sh@57 -- # uname 00:20:39.044 22:06:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:39.044 22:06:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:39.044 22:06:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:39.044 22:06:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:39.044 22:06:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:39.044 22:06:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:39.044 22:06:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:39.044 22:06:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:39.044 22:06:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:39.044 22:06:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:39.044 22:06:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:39.044 22:06:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:39.044 22:06:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:39.044 22:06:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:39.044 22:06:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:39.044 22:06:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:39.044 22:06:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@104 -- # continue 2 00:20:39.044 22:06:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@104 -- # continue 2 00:20:39.044 22:06:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:39.044 22:06:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:39.044 22:06:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:39.044 22:06:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:39.044 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:39.044 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:39.044 altname enp217s0f0np0 00:20:39.044 altname ens818f0np0 00:20:39.044 inet 192.168.100.8/24 scope global mlx_0_0 00:20:39.044 valid_lft forever preferred_lft forever 00:20:39.044 22:06:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:39.044 22:06:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:39.044 22:06:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:39.044 22:06:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:39.044 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:39.044 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:39.044 altname enp217s0f1np1 00:20:39.044 altname ens818f1np1 00:20:39.044 inet 192.168.100.9/24 scope global mlx_0_1 00:20:39.044 valid_lft forever preferred_lft forever 00:20:39.044 22:06:49 -- nvmf/common.sh@410 -- # return 0 00:20:39.044 22:06:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:39.044 22:06:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:39.044 22:06:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:39.044 22:06:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:39.044 22:06:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:39.044 22:06:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:39.044 22:06:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:39.044 22:06:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:39.044 22:06:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:39.044 22:06:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@104 -- # continue 2 00:20:39.044 22:06:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:39.044 22:06:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:39.044 22:06:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@104 -- # continue 2 00:20:39.044 22:06:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:39.044 22:06:49 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:39.044 22:06:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:39.044 22:06:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:39.044 22:06:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:39.044 22:06:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:39.044 192.168.100.9' 00:20:39.044 22:06:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:39.044 192.168.100.9' 00:20:39.044 22:06:49 -- nvmf/common.sh@445 -- # head -n 1 00:20:39.044 22:06:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:39.044 22:06:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:39.044 192.168.100.9' 00:20:39.044 22:06:49 -- nvmf/common.sh@446 -- # tail -n +2 00:20:39.044 22:06:49 -- nvmf/common.sh@446 -- # head -n 1 00:20:39.044 22:06:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:39.044 22:06:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:39.044 22:06:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:39.044 22:06:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:39.044 22:06:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:39.044 22:06:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:39.044 22:06:49 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2230855 00:20:39.044 22:06:49 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:39.044 22:06:49 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:39.044 22:06:49 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2230855 00:20:39.044 22:06:49 -- common/autotest_common.sh@819 -- # '[' -z 2230855 ']' 00:20:39.044 22:06:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.044 22:06:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:39.044 22:06:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
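The fuzz-target bring-up that follows condenses to a handful of SPDK RPC calls plus two nvme_fuzz invocations, all of which appear verbatim in the trace below. A rough manual sketch of the same sequence, assuming scripts/rpc.py is used directly against the /var/tmp/spdk.sock socket that the already-launched nvmf_tgt listens on (the harness goes through its own rpc_cmd wrapper instead):

# Sketch only, reconstructed from the fabrics_fuzz.sh trace below; SPDK_DIR and
# direct rpc.py usage are assumptions, the flags and addresses are from the log.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py"

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
# First pass: 30 seconds of randomized admin/IO commands with seed 123456.
$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
# Second pass: replay the canned command set from example.json instead of random input.
$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" -j $SPDK_DIR/test/app/fuzz/nvme_fuzz/example.json -a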
00:20:39.044 22:06:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:39.044 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:20:39.608 22:06:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:39.608 22:06:50 -- common/autotest_common.sh@852 -- # return 0 00:20:39.609 22:06:50 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:39.609 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.609 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:20:39.865 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.865 22:06:50 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:39.865 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.865 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:20:39.865 Malloc0 00:20:39.865 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.865 22:06:50 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.865 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.865 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:20:39.865 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.865 22:06:50 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.865 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.865 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:20:39.865 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.865 22:06:50 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:39.865 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:39.865 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:20:39.865 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:39.865 22:06:50 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:39.865 22:06:50 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:21:11.970 Fuzzing completed. Shutting down the fuzz application 00:21:11.970 00:21:11.970 Dumping successful admin opcodes: 00:21:11.970 8, 9, 10, 24, 00:21:11.970 Dumping successful io opcodes: 00:21:11.970 0, 9, 00:21:11.970 NS: 0x200003af1f00 I/O qp, Total commands completed: 1013801, total successful commands: 5939, random_seed: 2271076992 00:21:11.970 NS: 0x200003af1f00 admin qp, Total commands completed: 128480, total successful commands: 1045, random_seed: 2904234560 00:21:11.970 22:07:21 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:11.970 Fuzzing completed. 
Shutting down the fuzz application 00:21:11.970 00:21:11.970 Dumping successful admin opcodes: 00:21:11.970 24, 00:21:11.970 Dumping successful io opcodes: 00:21:11.970 00:21:11.970 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 908957738 00:21:11.970 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 909033508 00:21:11.970 22:07:22 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.970 22:07:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:11.970 22:07:22 -- common/autotest_common.sh@10 -- # set +x 00:21:11.970 22:07:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:11.970 22:07:22 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:11.970 22:07:22 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:11.970 22:07:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:11.970 22:07:22 -- nvmf/common.sh@116 -- # sync 00:21:11.970 22:07:22 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:11.970 22:07:22 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:11.970 22:07:22 -- nvmf/common.sh@119 -- # set +e 00:21:11.970 22:07:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:11.970 22:07:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:11.970 rmmod nvme_rdma 00:21:11.970 rmmod nvme_fabrics 00:21:11.970 22:07:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:11.970 22:07:22 -- nvmf/common.sh@123 -- # set -e 00:21:11.970 22:07:22 -- nvmf/common.sh@124 -- # return 0 00:21:11.970 22:07:22 -- nvmf/common.sh@477 -- # '[' -n 2230855 ']' 00:21:11.970 22:07:22 -- nvmf/common.sh@478 -- # killprocess 2230855 00:21:11.970 22:07:22 -- common/autotest_common.sh@926 -- # '[' -z 2230855 ']' 00:21:11.970 22:07:22 -- common/autotest_common.sh@930 -- # kill -0 2230855 00:21:11.970 22:07:22 -- common/autotest_common.sh@931 -- # uname 00:21:11.970 22:07:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:11.970 22:07:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2230855 00:21:11.970 22:07:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:11.970 22:07:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:11.970 22:07:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2230855' 00:21:11.970 killing process with pid 2230855 00:21:11.970 22:07:22 -- common/autotest_common.sh@945 -- # kill 2230855 00:21:11.970 22:07:22 -- common/autotest_common.sh@950 -- # wait 2230855 00:21:11.970 22:07:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:11.970 22:07:22 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:11.970 22:07:22 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:11.970 00:21:11.970 real 0m41.134s 00:21:11.970 user 0m50.281s 00:21:11.970 sys 0m22.765s 00:21:11.970 22:07:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:11.970 22:07:22 -- common/autotest_common.sh@10 -- # set +x 00:21:11.970 ************************************ 00:21:11.970 END TEST nvmf_fuzz 00:21:11.970 ************************************ 00:21:11.970 22:07:23 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:11.970 22:07:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 
1 ']' 00:21:11.970 22:07:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:11.970 22:07:23 -- common/autotest_common.sh@10 -- # set +x 00:21:11.970 ************************************ 00:21:11.970 START TEST nvmf_multiconnection 00:21:11.971 ************************************ 00:21:11.971 22:07:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:11.971 * Looking for test storage... 00:21:11.971 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:11.971 22:07:23 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.971 22:07:23 -- nvmf/common.sh@7 -- # uname -s 00:21:11.971 22:07:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:11.971 22:07:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.971 22:07:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.971 22:07:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.971 22:07:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.971 22:07:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.971 22:07:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.971 22:07:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.971 22:07:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.971 22:07:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.971 22:07:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:11.971 22:07:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:11.971 22:07:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.971 22:07:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.971 22:07:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.971 22:07:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:11.971 22:07:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.971 22:07:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.971 22:07:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.971 22:07:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.971 22:07:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.971 22:07:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.971 22:07:23 -- paths/export.sh@5 -- # export PATH 00:21:11.971 22:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.971 22:07:23 -- nvmf/common.sh@46 -- # : 0 00:21:11.971 22:07:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:11.971 22:07:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:11.971 22:07:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:11.971 22:07:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.971 22:07:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.971 22:07:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:11.971 22:07:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:11.971 22:07:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:11.971 22:07:23 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:11.971 22:07:23 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:11.971 22:07:23 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:11.971 22:07:23 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:11.971 22:07:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:11.971 22:07:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.971 22:07:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:11.971 22:07:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:11.971 22:07:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:11.971 22:07:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.971 22:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.971 22:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.971 22:07:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:11.971 22:07:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:11.971 22:07:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:11.971 22:07:23 -- common/autotest_common.sh@10 -- # set +x 00:21:20.087 22:07:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:20.087 22:07:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:20.087 22:07:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:20.087 22:07:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:20.087 22:07:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:20.087 22:07:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:20.087 22:07:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:20.087 22:07:30 -- nvmf/common.sh@294 -- # net_devs=() 
00:21:20.087 22:07:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:20.087 22:07:30 -- nvmf/common.sh@295 -- # e810=() 00:21:20.087 22:07:30 -- nvmf/common.sh@295 -- # local -ga e810 00:21:20.087 22:07:30 -- nvmf/common.sh@296 -- # x722=() 00:21:20.087 22:07:30 -- nvmf/common.sh@296 -- # local -ga x722 00:21:20.087 22:07:30 -- nvmf/common.sh@297 -- # mlx=() 00:21:20.087 22:07:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:20.087 22:07:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.087 22:07:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.088 22:07:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:20.088 22:07:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:20.088 22:07:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:20.088 22:07:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:20.088 22:07:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:20.088 22:07:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:20.088 22:07:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:20.088 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:20.088 22:07:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:20.088 22:07:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:20.088 22:07:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:20.088 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:20.088 22:07:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:20.088 22:07:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:20.088 22:07:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:20.088 22:07:30 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.088 22:07:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:20.088 22:07:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.088 22:07:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:20.088 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:20.088 22:07:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.088 22:07:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:20.088 22:07:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.088 22:07:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:20.088 22:07:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.088 22:07:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:20.088 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:20.088 22:07:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.088 22:07:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:20.088 22:07:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:20.088 22:07:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:20.088 22:07:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:20.088 22:07:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:20.088 22:07:30 -- nvmf/common.sh@57 -- # uname 00:21:20.088 22:07:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:21:20.088 22:07:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:20.088 22:07:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:20.088 22:07:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:20.088 22:07:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:20.088 22:07:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:20.088 22:07:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:20.088 22:07:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:20.088 22:07:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:20.088 22:07:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:20.088 22:07:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:20.088 22:07:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:20.088 22:07:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:20.088 22:07:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:20.088 22:07:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:20.088 22:07:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:20.088 22:07:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:20.088 22:07:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:20.088 22:07:31 -- nvmf/common.sh@104 -- # continue 2 00:21:20.088 22:07:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:20.088 22:07:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:20.088 22:07:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:20.088 22:07:31 -- 
nvmf/common.sh@104 -- # continue 2 00:21:20.088 22:07:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:20.088 22:07:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:20.088 22:07:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:20.088 22:07:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:20.088 22:07:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:20.088 22:07:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:20.088 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:20.088 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:20.088 altname enp217s0f0np0 00:21:20.088 altname ens818f0np0 00:21:20.088 inet 192.168.100.8/24 scope global mlx_0_0 00:21:20.088 valid_lft forever preferred_lft forever 00:21:20.088 22:07:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:20.088 22:07:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:20.088 22:07:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:20.088 22:07:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:20.088 22:07:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:20.088 22:07:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:20.088 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:20.088 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:20.088 altname enp217s0f1np1 00:21:20.088 altname ens818f1np1 00:21:20.088 inet 192.168.100.9/24 scope global mlx_0_1 00:21:20.088 valid_lft forever preferred_lft forever 00:21:20.088 22:07:31 -- nvmf/common.sh@410 -- # return 0 00:21:20.088 22:07:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:20.088 22:07:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:20.088 22:07:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:20.088 22:07:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:20.088 22:07:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:20.088 22:07:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:20.088 22:07:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:20.088 22:07:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:20.088 22:07:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:20.088 22:07:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:20.088 22:07:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:20.088 22:07:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:20.088 22:07:31 -- nvmf/common.sh@104 -- # continue 2 00:21:20.088 22:07:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:20.088 22:07:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:20.088 22:07:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:21:20.088 22:07:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:20.088 22:07:31 -- nvmf/common.sh@104 -- # continue 2 00:21:20.088 22:07:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:20.088 22:07:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:20.088 22:07:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:20.088 22:07:31 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:20.088 22:07:31 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:20.088 22:07:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:20.088 22:07:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:20.088 22:07:31 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:20.088 192.168.100.9' 00:21:20.088 22:07:31 -- nvmf/common.sh@445 -- # head -n 1 00:21:20.088 22:07:31 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:20.088 192.168.100.9' 00:21:20.089 22:07:31 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:20.089 22:07:31 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:20.089 192.168.100.9' 00:21:20.089 22:07:31 -- nvmf/common.sh@446 -- # tail -n +2 00:21:20.089 22:07:31 -- nvmf/common.sh@446 -- # head -n 1 00:21:20.089 22:07:31 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:20.089 22:07:31 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:20.089 22:07:31 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:20.089 22:07:31 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:20.089 22:07:31 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:20.089 22:07:31 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:20.089 22:07:31 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:20.089 22:07:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:20.089 22:07:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:20.089 22:07:31 -- common/autotest_common.sh@10 -- # set +x 00:21:20.089 22:07:31 -- nvmf/common.sh@469 -- # nvmfpid=2240418 00:21:20.089 22:07:31 -- nvmf/common.sh@470 -- # waitforlisten 2240418 00:21:20.089 22:07:31 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:20.089 22:07:31 -- common/autotest_common.sh@819 -- # '[' -z 2240418 ']' 00:21:20.089 22:07:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.089 22:07:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:20.089 22:07:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.089 22:07:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:20.089 22:07:31 -- common/autotest_common.sh@10 -- # set +x 00:21:20.089 [2024-07-26 22:07:31.209725] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:20.089 [2024-07-26 22:07:31.209777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.089 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.089 [2024-07-26 22:07:31.295696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.348 [2024-07-26 22:07:31.336255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:20.348 [2024-07-26 22:07:31.336359] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.348 [2024-07-26 22:07:31.336369] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.348 [2024-07-26 22:07:31.336379] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:20.348 [2024-07-26 22:07:31.336424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.348 [2024-07-26 22:07:31.336515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.348 [2024-07-26 22:07:31.336598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.348 [2024-07-26 22:07:31.336600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:20.914 22:07:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:20.914 22:07:32 -- common/autotest_common.sh@852 -- # return 0 00:21:20.914 22:07:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:20.914 22:07:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:20.914 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:20.914 22:07:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.914 22:07:32 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:20.914 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.914 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:20.914 [2024-07-26 22:07:32.084811] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x224a4b0/0x224e9a0) succeed. 00:21:20.914 [2024-07-26 22:07:32.094909] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x224baa0/0x2290030) succeed. 
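The multiconnection setup that follows repeats one pattern eleven times (NVMF_SUBSYS=11): create a 64 MB malloc bdev with 512-byte blocks, wrap it in its own subsystem, and expose it on the 192.168.100.8:4420 RDMA listener. A condensed sketch of that loop, again assuming direct use of scripts/rpc.py in place of the harness's rpc_cmd wrapper:

# Sketch of the multiconnection.sh subsystem fan-out traced below (assumption:
# rpc.py talking to the nvmf_tgt already listening on /var/tmp/spdk.sock).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in $(seq 1 11); do
    $rpc bdev_malloc_create 64 512 -b Malloc$i                                   # 64 MB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done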
00:21:21.172 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:21.173 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.173 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 Malloc1 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 [2024-07-26 22:07:32.269003] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.173 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 Malloc2 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.173 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- 
common/autotest_common.sh@10 -- # set +x 00:21:21.173 Malloc3 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.173 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 Malloc4 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.173 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.173 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:21.173 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.173 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.431 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.431 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.431 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:21.431 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.431 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.431 Malloc5 00:21:21.431 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.431 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:21.431 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.431 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.431 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.431 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 
Malloc5 00:21:21.431 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.431 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.431 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.431 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:21.431 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.431 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.431 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.431 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.431 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:21.431 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.431 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.431 Malloc6 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.432 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 Malloc7 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:21:21.432 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.432 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 Malloc8 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.432 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 Malloc9 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.432 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.432 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.432 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:21.432 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.432 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.690 Malloc10 00:21:21.690 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.690 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:21.690 22:07:32 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.690 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.690 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.690 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:21.690 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.690 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.690 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.690 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:21:21.690 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.690 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.690 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.690 22:07:32 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.690 22:07:32 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:21.690 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.690 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.690 Malloc11 00:21:21.690 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.690 22:07:32 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:21.690 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.690 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.690 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.690 22:07:32 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:21.690 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.690 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.690 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.690 22:07:32 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:21:21.690 22:07:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:21.690 22:07:32 -- common/autotest_common.sh@10 -- # set +x 00:21:21.690 22:07:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:21.690 22:07:32 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:21.690 22:07:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:21.690 22:07:32 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:22.622 22:07:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:22.622 22:07:33 -- common/autotest_common.sh@1177 -- # local i=0 00:21:22.622 22:07:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:22.622 22:07:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:22.622 22:07:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:24.518 22:07:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:24.518 22:07:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:24.518 22:07:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:21:24.518 22:07:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:24.518 22:07:35 -- 
common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:24.518 22:07:35 -- common/autotest_common.sh@1187 -- # return 0 00:21:24.518 22:07:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:24.518 22:07:35 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:25.887 22:07:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:25.887 22:07:36 -- common/autotest_common.sh@1177 -- # local i=0 00:21:25.887 22:07:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:25.887 22:07:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:25.887 22:07:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:27.782 22:07:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:27.782 22:07:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:27.782 22:07:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:21:27.782 22:07:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:27.782 22:07:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:27.782 22:07:38 -- common/autotest_common.sh@1187 -- # return 0 00:21:27.782 22:07:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:27.782 22:07:38 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:28.713 22:07:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:28.713 22:07:39 -- common/autotest_common.sh@1177 -- # local i=0 00:21:28.713 22:07:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:28.713 22:07:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:28.713 22:07:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:30.608 22:07:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:30.608 22:07:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:30.608 22:07:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:21:30.608 22:07:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:30.608 22:07:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:30.608 22:07:41 -- common/autotest_common.sh@1187 -- # return 0 00:21:30.608 22:07:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.608 22:07:41 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:31.539 22:07:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:31.539 22:07:42 -- common/autotest_common.sh@1177 -- # local i=0 00:21:31.540 22:07:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:31.540 22:07:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:31.540 22:07:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:34.116 22:07:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:34.116 22:07:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:34.116 
22:07:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:21:34.116 22:07:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:34.116 22:07:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:34.116 22:07:44 -- common/autotest_common.sh@1187 -- # return 0 00:21:34.116 22:07:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:34.116 22:07:44 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:34.681 22:07:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:34.682 22:07:45 -- common/autotest_common.sh@1177 -- # local i=0 00:21:34.682 22:07:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:34.682 22:07:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:34.682 22:07:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:36.579 22:07:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:36.579 22:07:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:36.579 22:07:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:21:36.579 22:07:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:36.579 22:07:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:36.579 22:07:47 -- common/autotest_common.sh@1187 -- # return 0 00:21:36.579 22:07:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:36.579 22:07:47 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:37.511 22:07:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:37.511 22:07:48 -- common/autotest_common.sh@1177 -- # local i=0 00:21:37.511 22:07:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:37.511 22:07:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:37.511 22:07:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:40.037 22:07:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:40.037 22:07:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:40.037 22:07:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:21:40.037 22:07:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:40.037 22:07:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:40.037 22:07:50 -- common/autotest_common.sh@1187 -- # return 0 00:21:40.037 22:07:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.037 22:07:50 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:40.602 22:07:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:40.602 22:07:51 -- common/autotest_common.sh@1177 -- # local i=0 00:21:40.602 22:07:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:40.602 22:07:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:40.602 22:07:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:43.126 
22:07:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:43.126 22:07:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:43.126 22:07:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:21:43.126 22:07:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:43.126 22:07:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:43.126 22:07:53 -- common/autotest_common.sh@1187 -- # return 0 00:21:43.126 22:07:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:43.126 22:07:53 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:43.690 22:07:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:43.690 22:07:54 -- common/autotest_common.sh@1177 -- # local i=0 00:21:43.690 22:07:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:43.690 22:07:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:43.690 22:07:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:45.598 22:07:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:45.598 22:07:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:45.598 22:07:56 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:21:45.598 22:07:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:45.598 22:07:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:45.598 22:07:56 -- common/autotest_common.sh@1187 -- # return 0 00:21:45.598 22:07:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:45.598 22:07:56 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:46.532 22:07:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:46.532 22:07:57 -- common/autotest_common.sh@1177 -- # local i=0 00:21:46.532 22:07:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:46.532 22:07:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:46.532 22:07:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:49.054 22:07:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:49.054 22:07:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:49.054 22:07:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:21:49.054 22:07:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:49.054 22:07:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.054 22:07:59 -- common/autotest_common.sh@1187 -- # return 0 00:21:49.054 22:07:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.054 22:07:59 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:49.619 22:08:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:49.619 22:08:00 -- common/autotest_common.sh@1177 -- # local i=0 00:21:49.619 22:08:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:21:49.619 22:08:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:49.619 22:08:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:52.143 22:08:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:52.143 22:08:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:52.143 22:08:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:21:52.143 22:08:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:52.143 22:08:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.143 22:08:02 -- common/autotest_common.sh@1187 -- # return 0 00:21:52.143 22:08:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.143 22:08:02 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:21:52.709 22:08:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:52.709 22:08:03 -- common/autotest_common.sh@1177 -- # local i=0 00:21:52.709 22:08:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:52.709 22:08:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:52.709 22:08:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:54.642 22:08:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:54.642 22:08:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:54.642 22:08:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:21:54.642 22:08:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:54.642 22:08:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:54.642 22:08:05 -- common/autotest_common.sh@1187 -- # return 0 00:21:54.642 22:08:05 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:54.642 [global] 00:21:54.642 thread=1 00:21:54.642 invalidate=1 00:21:54.642 rw=read 00:21:54.642 time_based=1 00:21:54.642 runtime=10 00:21:54.642 ioengine=libaio 00:21:54.642 direct=1 00:21:54.642 bs=262144 00:21:54.642 iodepth=64 00:21:54.642 norandommap=1 00:21:54.642 numjobs=1 00:21:54.642 00:21:54.642 [job0] 00:21:54.642 filename=/dev/nvme0n1 00:21:54.642 [job1] 00:21:54.642 filename=/dev/nvme10n1 00:21:54.642 [job2] 00:21:54.642 filename=/dev/nvme1n1 00:21:54.642 [job3] 00:21:54.642 filename=/dev/nvme2n1 00:21:54.642 [job4] 00:21:54.642 filename=/dev/nvme3n1 00:21:54.642 [job5] 00:21:54.642 filename=/dev/nvme4n1 00:21:54.642 [job6] 00:21:54.642 filename=/dev/nvme5n1 00:21:54.642 [job7] 00:21:54.642 filename=/dev/nvme6n1 00:21:54.642 [job8] 00:21:54.642 filename=/dev/nvme7n1 00:21:54.642 [job9] 00:21:54.643 filename=/dev/nvme8n1 00:21:54.643 [job10] 00:21:54.643 filename=/dev/nvme9n1 00:21:54.900 Could not set queue depth (nvme0n1) 00:21:54.900 Could not set queue depth (nvme10n1) 00:21:54.900 Could not set queue depth (nvme1n1) 00:21:54.900 Could not set queue depth (nvme2n1) 00:21:54.900 Could not set queue depth (nvme3n1) 00:21:54.900 Could not set queue depth (nvme4n1) 00:21:54.900 Could not set queue depth (nvme5n1) 00:21:54.900 Could not set queue depth (nvme6n1) 00:21:54.900 Could not set queue depth (nvme7n1) 00:21:54.900 Could not set queue depth (nvme8n1) 00:21:54.900 Could not set queue depth (nvme9n1) 00:21:55.157 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, 
(W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:55.157 fio-3.35 00:21:55.157 Starting 11 threads 00:22:07.354 00:22:07.354 job0: (groupid=0, jobs=1): err= 0: pid=2246754: Fri Jul 26 22:08:16 2024 00:22:07.354 read: IOPS=1888, BW=472MiB/s (495MB/s)(4734MiB/10028msec) 00:22:07.354 slat (usec): min=12, max=15389, avg=520.62, stdev=1206.89 00:22:07.354 clat (usec): min=10071, max=85631, avg=33340.82, stdev=8069.11 00:22:07.354 lat (usec): min=10302, max=85677, avg=33861.44, stdev=8227.85 00:22:07.354 clat percentiles (usec): 00:22:07.354 | 1.00th=[14484], 5.00th=[16450], 10.00th=[28967], 20.00th=[29754], 00:22:07.354 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31327], 60.00th=[31851], 00:22:07.354 | 70.00th=[32637], 80.00th=[38011], 90.00th=[46400], 95.00th=[47973], 00:22:07.354 | 99.00th=[50594], 99.50th=[52167], 99.90th=[82314], 99.95th=[83362], 00:22:07.354 | 99.99th=[85459] 00:22:07.354 bw ( KiB/s): min=342016, max=724480, per=11.76%, avg=483123.20, stdev=98628.85, samples=20 00:22:07.354 iops : min= 1336, max= 2830, avg=1887.20, stdev=385.27, samples=20 00:22:07.354 lat (msec) : 20=5.90%, 50=92.71%, 100=1.39% 00:22:07.354 cpu : usr=0.60%, sys=6.87%, ctx=3808, majf=0, minf=3221 00:22:07.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:07.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.354 issued rwts: total=18935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.354 job1: (groupid=0, jobs=1): err= 0: pid=2246755: Fri Jul 26 22:08:16 2024 00:22:07.354 read: IOPS=1643, BW=411MiB/s (431MB/s)(4119MiB/10027msec) 00:22:07.354 slat (usec): min=11, max=30950, avg=594.75, stdev=1463.97 00:22:07.354 clat (msec): min=13, max=101, avg=38.31, stdev=14.58 00:22:07.354 lat (msec): min=13, max=101, avg=38.91, stdev=14.85 00:22:07.354 clat percentiles (usec): 00:22:07.354 | 1.00th=[23987], 5.00th=[28967], 10.00th=[29492], 20.00th=[30016], 00:22:07.354 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31589], 60.00th=[32113], 00:22:07.354 | 70.00th=[32900], 80.00th=[61080], 90.00th=[63701], 95.00th=[66323], 00:22:07.354 | 99.00th=[82314], 99.50th=[83362], 
99.90th=[86508], 99.95th=[89654], 00:22:07.354 | 99.99th=[91751] 00:22:07.354 bw ( KiB/s): min=235520, max=525312, per=10.23%, avg=420172.80, stdev=127928.44, samples=20 00:22:07.354 iops : min= 920, max= 2052, avg=1641.30, stdev=499.72, samples=20 00:22:07.354 lat (msec) : 20=0.47%, 50=78.50%, 100=21.02%, 250=0.01% 00:22:07.354 cpu : usr=0.45%, sys=6.04%, ctx=3431, majf=0, minf=4097 00:22:07.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:07.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.354 issued rwts: total=16476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.354 job2: (groupid=0, jobs=1): err= 0: pid=2246756: Fri Jul 26 22:08:16 2024 00:22:07.354 read: IOPS=1382, BW=346MiB/s (362MB/s)(3469MiB/10039msec) 00:22:07.354 slat (usec): min=13, max=15432, avg=710.66, stdev=1689.58 00:22:07.354 clat (usec): min=11727, max=90233, avg=45555.27, stdev=5526.84 00:22:07.354 lat (usec): min=11970, max=90273, avg=46265.93, stdev=5757.94 00:22:07.354 clat percentiles (usec): 00:22:07.354 | 1.00th=[29230], 5.00th=[31327], 10.00th=[43254], 20.00th=[44827], 00:22:07.354 | 30.00th=[45351], 40.00th=[45351], 50.00th=[45876], 60.00th=[46400], 00:22:07.354 | 70.00th=[46924], 80.00th=[47973], 90.00th=[49546], 95.00th=[51119], 00:22:07.354 | 99.00th=[61604], 99.50th=[62653], 99.90th=[72877], 99.95th=[82314], 00:22:07.354 | 99.99th=[90702] 00:22:07.354 bw ( KiB/s): min=308736, max=462848, per=8.61%, avg=353561.60, stdev=32077.99, samples=20 00:22:07.354 iops : min= 1206, max= 1808, avg=1381.10, stdev=125.30, samples=20 00:22:07.354 lat (msec) : 20=0.23%, 50=91.87%, 100=7.90% 00:22:07.354 cpu : usr=0.36%, sys=5.82%, ctx=2773, majf=0, minf=4097 00:22:07.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:07.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.354 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.354 issued rwts: total=13874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.354 job3: (groupid=0, jobs=1): err= 0: pid=2246757: Fri Jul 26 22:08:16 2024 00:22:07.354 read: IOPS=1870, BW=468MiB/s (490MB/s)(4690MiB/10027msec) 00:22:07.354 slat (usec): min=12, max=13530, avg=529.82, stdev=1264.82 00:22:07.354 clat (usec): min=10561, max=65855, avg=33649.32, stdev=5932.54 00:22:07.354 lat (usec): min=10801, max=65915, avg=34179.14, stdev=6087.65 00:22:07.354 clat percentiles (usec): 00:22:07.354 | 1.00th=[28705], 5.00th=[29230], 10.00th=[29754], 20.00th=[30278], 00:22:07.354 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31589], 60.00th=[32113], 00:22:07.354 | 70.00th=[32637], 80.00th=[33817], 90.00th=[45876], 95.00th=[47449], 00:22:07.354 | 99.00th=[50594], 99.50th=[53216], 99.90th=[56886], 99.95th=[57934], 00:22:07.354 | 99.99th=[59507] 00:22:07.354 bw ( KiB/s): min=335872, max=518656, per=11.65%, avg=478592.00, stdev=64141.77, samples=20 00:22:07.354 iops : min= 1312, max= 2026, avg=1869.50, stdev=250.55, samples=20 00:22:07.354 lat (msec) : 20=0.32%, 50=98.30%, 100=1.38% 00:22:07.354 cpu : usr=0.66%, sys=6.93%, ctx=3596, majf=0, minf=4097 00:22:07.354 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:07.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.354 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.354 issued rwts: total=18758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.354 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.354 job4: (groupid=0, jobs=1): err= 0: pid=2246758: Fri Jul 26 22:08:16 2024 00:22:07.354 read: IOPS=1050, BW=263MiB/s (275MB/s)(2640MiB/10051msec) 00:22:07.354 slat (usec): min=14, max=26022, avg=928.74, stdev=2594.01 00:22:07.354 clat (msec): min=13, max=103, avg=59.93, stdev= 9.48 00:22:07.354 lat (msec): min=13, max=103, avg=60.86, stdev= 9.89 00:22:07.354 clat percentiles (msec): 00:22:07.355 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 47], 20.00th=[ 48], 00:22:07.355 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:22:07.355 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 75], 00:22:07.355 | 99.00th=[ 85], 99.50th=[ 87], 99.90th=[ 102], 99.95th=[ 102], 00:22:07.355 | 99.99th=[ 104] 00:22:07.355 bw ( KiB/s): min=226304, max=347136, per=6.54%, avg=268697.20, stdev=36117.29, samples=20 00:22:07.355 iops : min= 884, max= 1356, avg=1049.55, stdev=141.11, samples=20 00:22:07.355 lat (msec) : 20=0.25%, 50=23.92%, 100=75.72%, 250=0.11% 00:22:07.355 cpu : usr=0.47%, sys=4.83%, ctx=2188, majf=0, minf=4097 00:22:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.355 issued rwts: total=10558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.355 job5: (groupid=0, jobs=1): err= 0: pid=2246759: Fri Jul 26 22:08:16 2024 00:22:07.355 read: IOPS=1569, BW=392MiB/s (411MB/s)(3940MiB/10040msec) 00:22:07.355 slat (usec): min=12, max=16433, avg=630.83, stdev=1548.61 00:22:07.355 clat (usec): min=10499, max=86756, avg=40100.65, stdev=8572.65 00:22:07.355 lat (usec): min=10750, max=86806, avg=40731.47, stdev=8779.76 00:22:07.355 clat percentiles (usec): 00:22:07.355 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29754], 20.00th=[30802], 00:22:07.355 | 30.00th=[31589], 40.00th=[33424], 50.00th=[44827], 60.00th=[45351], 00:22:07.355 | 70.00th=[45876], 80.00th=[46924], 90.00th=[47973], 95.00th=[49546], 00:22:07.355 | 99.00th=[61604], 99.50th=[63177], 99.90th=[81265], 99.95th=[85459], 00:22:07.355 | 99.99th=[86508] 00:22:07.355 bw ( KiB/s): min=307712, max=528384, per=9.78%, avg=401817.60, stdev=82423.31, samples=20 00:22:07.355 iops : min= 1202, max= 2064, avg=1569.60, stdev=321.97, samples=20 00:22:07.355 lat (msec) : 20=0.37%, 50=95.06%, 100=4.57% 00:22:07.355 cpu : usr=0.51%, sys=6.51%, ctx=3053, majf=0, minf=4097 00:22:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.355 issued rwts: total=15759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.355 job6: (groupid=0, jobs=1): err= 0: pid=2246760: Fri Jul 26 22:08:16 2024 00:22:07.355 read: IOPS=1570, BW=393MiB/s (412MB/s)(3941MiB/10041msec) 00:22:07.355 slat (usec): min=12, max=13668, avg=630.76, stdev=1518.57 00:22:07.355 clat (usec): min=10276, max=83525, avg=40088.76, stdev=8466.57 00:22:07.355 lat (usec): min=10616, max=83580, avg=40719.51, stdev=8674.24 00:22:07.355 
clat percentiles (usec): 00:22:07.355 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29754], 20.00th=[31065], 00:22:07.355 | 30.00th=[31589], 40.00th=[33817], 50.00th=[44827], 60.00th=[45351], 00:22:07.355 | 70.00th=[45876], 80.00th=[46924], 90.00th=[47973], 95.00th=[49546], 00:22:07.355 | 99.00th=[61080], 99.50th=[62653], 99.90th=[72877], 99.95th=[76022], 00:22:07.355 | 99.99th=[83362] 00:22:07.355 bw ( KiB/s): min=306688, max=525312, per=9.78%, avg=401971.20, stdev=81808.75, samples=20 00:22:07.355 iops : min= 1198, max= 2052, avg=1570.20, stdev=319.57, samples=20 00:22:07.355 lat (msec) : 20=0.37%, 50=95.03%, 100=4.59% 00:22:07.355 cpu : usr=0.48%, sys=6.66%, ctx=3023, majf=0, minf=4097 00:22:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.355 issued rwts: total=15765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.355 job7: (groupid=0, jobs=1): err= 0: pid=2246761: Fri Jul 26 22:08:16 2024 00:22:07.355 read: IOPS=1141, BW=285MiB/s (299MB/s)(2868MiB/10050msec) 00:22:07.355 slat (usec): min=15, max=27091, avg=861.22, stdev=2281.27 00:22:07.355 clat (msec): min=11, max=109, avg=55.15, stdev=10.53 00:22:07.355 lat (msec): min=11, max=114, avg=56.02, stdev=10.85 00:22:07.355 clat percentiles (msec): 00:22:07.355 | 1.00th=[ 42], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 47], 00:22:07.355 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 50], 60.00th=[ 62], 00:22:07.355 | 70.00th=[ 63], 80.00th=[ 64], 90.00th=[ 66], 95.00th=[ 74], 00:22:07.355 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 107], 99.95th=[ 108], 00:22:07.355 | 99.99th=[ 110] 00:22:07.355 bw ( KiB/s): min=199168, max=347648, per=7.11%, avg=292019.20, stdev=46236.11, samples=20 00:22:07.355 iops : min= 778, max= 1358, avg=1140.70, stdev=180.61, samples=20 00:22:07.355 lat (msec) : 20=0.33%, 50=50.44%, 100=48.99%, 250=0.24% 00:22:07.355 cpu : usr=0.41%, sys=5.44%, ctx=2282, majf=0, minf=4097 00:22:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.355 issued rwts: total=11470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.355 job8: (groupid=0, jobs=1): err= 0: pid=2246762: Fri Jul 26 22:08:16 2024 00:22:07.355 read: IOPS=1870, BW=468MiB/s (490MB/s)(4690MiB/10028msec) 00:22:07.355 slat (usec): min=13, max=13197, avg=529.79, stdev=1237.17 00:22:07.355 clat (usec): min=10462, max=60149, avg=33646.19, stdev=5967.11 00:22:07.355 lat (usec): min=10695, max=63566, avg=34175.98, stdev=6118.24 00:22:07.355 clat percentiles (usec): 00:22:07.355 | 1.00th=[28705], 5.00th=[29230], 10.00th=[29754], 20.00th=[30278], 00:22:07.355 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31589], 60.00th=[32113], 00:22:07.355 | 70.00th=[32637], 80.00th=[33817], 90.00th=[46400], 95.00th=[47449], 00:22:07.355 | 99.00th=[51119], 99.50th=[53216], 99.90th=[56886], 99.95th=[58459], 00:22:07.355 | 99.99th=[60031] 00:22:07.355 bw ( KiB/s): min=337920, max=524800, per=11.65%, avg=478720.00, stdev=64877.11, samples=20 00:22:07.355 iops : min= 1320, max= 2050, avg=1870.00, stdev=253.43, samples=20 00:22:07.355 lat (msec) : 20=0.36%, 
50=98.23%, 100=1.41% 00:22:07.355 cpu : usr=0.50%, sys=7.22%, ctx=3553, majf=0, minf=4097 00:22:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.355 issued rwts: total=18761,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.355 job9: (groupid=0, jobs=1): err= 0: pid=2246763: Fri Jul 26 22:08:16 2024 00:22:07.355 read: IOPS=1041, BW=260MiB/s (273MB/s)(2618MiB/10051msec) 00:22:07.355 slat (usec): min=15, max=25339, avg=950.46, stdev=2403.05 00:22:07.355 clat (msec): min=12, max=112, avg=60.42, stdev= 9.05 00:22:07.355 lat (msec): min=13, max=112, avg=61.37, stdev= 9.41 00:22:07.355 clat percentiles (msec): 00:22:07.355 | 1.00th=[ 46], 5.00th=[ 47], 10.00th=[ 47], 20.00th=[ 50], 00:22:07.355 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 63], 60.00th=[ 64], 00:22:07.355 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 75], 00:22:07.355 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 95], 99.95th=[ 97], 00:22:07.355 | 99.99th=[ 111] 00:22:07.355 bw ( KiB/s): min=199680, max=339968, per=6.49%, avg=266444.80, stdev=35849.01, samples=20 00:22:07.355 iops : min= 780, max= 1328, avg=1040.80, stdev=140.04, samples=20 00:22:07.355 lat (msec) : 20=0.24%, 50=21.60%, 100=78.11%, 250=0.05% 00:22:07.355 cpu : usr=0.39%, sys=5.25%, ctx=2055, majf=0, minf=4097 00:22:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.355 issued rwts: total=10471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.355 job10: (groupid=0, jobs=1): err= 0: pid=2246766: Fri Jul 26 22:08:16 2024 00:22:07.355 read: IOPS=1041, BW=260MiB/s (273MB/s)(2617MiB/10050msec) 00:22:07.355 slat (usec): min=16, max=27875, avg=950.65, stdev=2395.93 00:22:07.355 clat (msec): min=13, max=110, avg=60.42, stdev= 9.07 00:22:07.355 lat (msec): min=13, max=110, avg=61.37, stdev= 9.43 00:22:07.355 clat percentiles (msec): 00:22:07.355 | 1.00th=[ 46], 5.00th=[ 47], 10.00th=[ 47], 20.00th=[ 50], 00:22:07.355 | 30.00th=[ 62], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:22:07.355 | 70.00th=[ 64], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 77], 00:22:07.355 | 99.00th=[ 84], 99.50th=[ 87], 99.90th=[ 97], 99.95th=[ 109], 00:22:07.355 | 99.99th=[ 111] 00:22:07.355 bw ( KiB/s): min=207872, max=341504, per=6.48%, avg=266418.85, stdev=35661.18, samples=20 00:22:07.355 iops : min= 812, max= 1334, avg=1040.65, stdev=139.32, samples=20 00:22:07.355 lat (msec) : 20=0.20%, 50=22.05%, 100=77.67%, 250=0.09% 00:22:07.355 cpu : usr=0.40%, sys=5.14%, ctx=2085, majf=0, minf=4097 00:22:07.355 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:07.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:07.355 issued rwts: total=10469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.355 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:07.355 00:22:07.355 Run status group 0 (all jobs): 00:22:07.355 READ: bw=4012MiB/s (4207MB/s), 260MiB/s-472MiB/s (273MB/s-495MB/s), io=39.4GiB (42.3GB), 
run=10027-10051msec 00:22:07.355 00:22:07.355 Disk stats (read/write): 00:22:07.355 nvme0n1: ios=37365/0, merge=0/0, ticks=1221090/0, in_queue=1221090, util=96.96% 00:22:07.355 nvme10n1: ios=32409/0, merge=0/0, ticks=1221658/0, in_queue=1221658, util=97.14% 00:22:07.355 nvme1n1: ios=27364/0, merge=0/0, ticks=1221534/0, in_queue=1221534, util=97.47% 00:22:07.355 nvme2n1: ios=36999/0, merge=0/0, ticks=1222113/0, in_queue=1222113, util=97.65% 00:22:07.355 nvme3n1: ios=20784/0, merge=0/0, ticks=1223046/0, in_queue=1223046, util=97.76% 00:22:07.355 nvme4n1: ios=31139/0, merge=0/0, ticks=1221501/0, in_queue=1221501, util=98.15% 00:22:07.355 nvme5n1: ios=31132/0, merge=0/0, ticks=1221495/0, in_queue=1221495, util=98.33% 00:22:07.355 nvme6n1: ios=22639/0, merge=0/0, ticks=1222292/0, in_queue=1222292, util=98.46% 00:22:07.355 nvme7n1: ios=36997/0, merge=0/0, ticks=1221872/0, in_queue=1221872, util=98.91% 00:22:07.356 nvme8n1: ios=20612/0, merge=0/0, ticks=1222514/0, in_queue=1222514, util=99.13% 00:22:07.356 nvme9n1: ios=20623/0, merge=0/0, ticks=1223297/0, in_queue=1223297, util=99.29% 00:22:07.356 22:08:16 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:07.356 [global] 00:22:07.356 thread=1 00:22:07.356 invalidate=1 00:22:07.356 rw=randwrite 00:22:07.356 time_based=1 00:22:07.356 runtime=10 00:22:07.356 ioengine=libaio 00:22:07.356 direct=1 00:22:07.356 bs=262144 00:22:07.356 iodepth=64 00:22:07.356 norandommap=1 00:22:07.356 numjobs=1 00:22:07.356 00:22:07.356 [job0] 00:22:07.356 filename=/dev/nvme0n1 00:22:07.356 [job1] 00:22:07.356 filename=/dev/nvme10n1 00:22:07.356 [job2] 00:22:07.356 filename=/dev/nvme1n1 00:22:07.356 [job3] 00:22:07.356 filename=/dev/nvme2n1 00:22:07.356 [job4] 00:22:07.356 filename=/dev/nvme3n1 00:22:07.356 [job5] 00:22:07.356 filename=/dev/nvme4n1 00:22:07.356 [job6] 00:22:07.356 filename=/dev/nvme5n1 00:22:07.356 [job7] 00:22:07.356 filename=/dev/nvme6n1 00:22:07.356 [job8] 00:22:07.356 filename=/dev/nvme7n1 00:22:07.356 [job9] 00:22:07.356 filename=/dev/nvme8n1 00:22:07.356 [job10] 00:22:07.356 filename=/dev/nvme9n1 00:22:07.356 Could not set queue depth (nvme0n1) 00:22:07.356 Could not set queue depth (nvme10n1) 00:22:07.356 Could not set queue depth (nvme1n1) 00:22:07.356 Could not set queue depth (nvme2n1) 00:22:07.356 Could not set queue depth (nvme3n1) 00:22:07.356 Could not set queue depth (nvme4n1) 00:22:07.356 Could not set queue depth (nvme5n1) 00:22:07.356 Could not set queue depth (nvme6n1) 00:22:07.356 Could not set queue depth (nvme7n1) 00:22:07.356 Could not set queue depth (nvme8n1) 00:22:07.356 Could not set queue depth (nvme9n1) 00:22:07.356 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:07.356 fio-3.35 00:22:07.356 Starting 11 threads 00:22:17.321 00:22:17.321 job0: (groupid=0, jobs=1): err= 0: pid=2248511: Fri Jul 26 22:08:27 2024 00:22:17.321 write: IOPS=1696, BW=424MiB/s (445MB/s)(4282MiB/10093msec); 0 zone resets 00:22:17.321 slat (usec): min=18, max=65082, avg=577.68, stdev=1547.11 00:22:17.321 clat (msec): min=2, max=222, avg=37.13, stdev=25.65 00:22:17.321 lat (msec): min=2, max=222, avg=37.71, stdev=26.03 00:22:17.321 clat percentiles (msec): 00:22:17.321 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 18], 00:22:17.321 | 30.00th=[ 18], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 39], 00:22:17.321 | 70.00th=[ 52], 80.00th=[ 56], 90.00th=[ 73], 95.00th=[ 79], 00:22:17.321 | 99.00th=[ 130], 99.50th=[ 134], 99.90th=[ 201], 99.95th=[ 213], 00:22:17.321 | 99.99th=[ 224] 00:22:17.321 bw ( KiB/s): min=114688, max=924672, per=12.85%, avg=436812.80, stdev=278602.06, samples=20 00:22:17.321 iops : min= 448, max= 3612, avg=1706.30, stdev=1088.29, samples=20 00:22:17.321 lat (msec) : 4=0.06%, 10=0.10%, 20=53.51%, 50=12.45%, 100=31.89% 00:22:17.321 lat (msec) : 250=1.99% 00:22:17.321 cpu : usr=3.04%, sys=5.34%, ctx=3836, majf=0, minf=1 00:22:17.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:17.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.321 issued rwts: total=0,17126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.321 job1: (groupid=0, jobs=1): err= 0: pid=2248523: Fri Jul 26 22:08:27 2024 00:22:17.321 write: IOPS=685, BW=171MiB/s (180MB/s)(1728MiB/10086msec); 0 zone resets 00:22:17.321 slat (usec): min=25, max=43248, avg=1445.02, stdev=3597.56 00:22:17.321 clat (msec): min=18, max=218, avg=91.93, stdev=23.65 00:22:17.321 lat (msec): min=18, max=218, avg=93.38, stdev=24.16 00:22:17.321 clat percentiles (msec): 00:22:17.321 | 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 70], 00:22:17.321 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 87], 60.00th=[ 91], 00:22:17.321 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 129], 00:22:17.321 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 194], 99.95th=[ 194], 00:22:17.321 | 99.99th=[ 220] 00:22:17.321 bw ( KiB/s): min=122880, max=229888, per=5.16%, avg=175279.95, stdev=42341.10, samples=20 00:22:17.321 iops : min= 480, max= 898, avg=684.65, stdev=165.35, samples=20 00:22:17.321 lat (msec) : 20=0.07%, 50=0.41%, 100=61.88%, 250=37.64% 00:22:17.321 cpu : usr=1.74%, sys=2.91%, ctx=1708, majf=0, minf=1 00:22:17.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:17.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.321 issued rwts: total=0,6910,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:22:17.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.321 job2: (groupid=0, jobs=1): err= 0: pid=2248524: Fri Jul 26 22:08:27 2024 00:22:17.321 write: IOPS=697, BW=174MiB/s (183MB/s)(1758MiB/10080msec); 0 zone resets 00:22:17.321 slat (usec): min=23, max=52031, avg=1376.87, stdev=4029.70 00:22:17.321 clat (msec): min=13, max=242, avg=90.31, stdev=27.15 00:22:17.321 lat (msec): min=13, max=242, avg=91.69, stdev=27.75 00:22:17.321 clat percentiles (msec): 00:22:17.321 | 1.00th=[ 39], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 69], 00:22:17.321 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 88], 60.00th=[ 93], 00:22:17.321 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 129], 00:22:17.321 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 199], 99.95th=[ 201], 00:22:17.321 | 99.99th=[ 243] 00:22:17.321 bw ( KiB/s): min=117760, max=295936, per=5.25%, avg=178432.00, stdev=50663.77, samples=20 00:22:17.321 iops : min= 460, max= 1156, avg=697.00, stdev=197.91, samples=20 00:22:17.321 lat (msec) : 20=0.17%, 50=1.92%, 100=61.13%, 250=36.78% 00:22:17.321 cpu : usr=1.72%, sys=3.03%, ctx=1822, majf=0, minf=1 00:22:17.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:17.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.321 issued rwts: total=0,7033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.321 job3: (groupid=0, jobs=1): err= 0: pid=2248525: Fri Jul 26 22:08:27 2024 00:22:17.321 write: IOPS=2233, BW=558MiB/s (586MB/s)(5598MiB/10024msec); 0 zone resets 00:22:17.321 slat (usec): min=16, max=5341, avg=444.23, stdev=868.45 00:22:17.321 clat (usec): min=7824, max=55259, avg=28197.54, stdev=11124.43 00:22:17.321 lat (usec): min=7872, max=56192, avg=28641.77, stdev=11283.94 00:22:17.321 clat percentiles (usec): 00:22:17.321 | 1.00th=[16712], 5.00th=[17433], 10.00th=[17695], 20.00th=[18220], 00:22:17.321 | 30.00th=[18482], 40.00th=[19006], 50.00th=[19792], 60.00th=[34866], 00:22:17.321 | 70.00th=[36963], 80.00th=[37487], 90.00th=[39584], 95.00th=[50594], 00:22:17.321 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:22:17.321 | 99.99th=[54264] 00:22:17.321 bw ( KiB/s): min=320512, max=887296, per=16.82%, avg=571665.25, stdev=221949.33, samples=20 00:22:17.321 iops : min= 1252, max= 3466, avg=2233.05, stdev=867.00, samples=20 00:22:17.321 lat (msec) : 10=0.02%, 20=51.17%, 50=43.16%, 100=5.65% 00:22:17.321 cpu : usr=3.62%, sys=5.65%, ctx=4930, majf=0, minf=1 00:22:17.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:17.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.321 issued rwts: total=0,22392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.321 job4: (groupid=0, jobs=1): err= 0: pid=2248526: Fri Jul 26 22:08:27 2024 00:22:17.321 write: IOPS=683, BW=171MiB/s (179MB/s)(1723MiB/10078msec); 0 zone resets 00:22:17.321 slat (usec): min=27, max=38426, avg=1445.39, stdev=3597.78 00:22:17.321 clat (msec): min=12, max=225, avg=92.08, stdev=24.14 00:22:17.321 lat (msec): min=12, max=225, avg=93.53, stdev=24.64 00:22:17.321 clat percentiles (msec): 00:22:17.321 | 1.00th=[ 67], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 70], 00:22:17.321 
| 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 87], 60.00th=[ 92], 00:22:17.321 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 130], 00:22:17.321 | 99.00th=[ 148], 99.50th=[ 159], 99.90th=[ 201], 99.95th=[ 203], 00:22:17.321 | 99.99th=[ 226] 00:22:17.321 bw ( KiB/s): min=115200, max=231424, per=5.15%, avg=174870.20, stdev=42954.78, samples=20 00:22:17.321 iops : min= 450, max= 904, avg=683.05, stdev=167.75, samples=20 00:22:17.321 lat (msec) : 20=0.12%, 50=0.41%, 100=61.80%, 250=37.68% 00:22:17.321 cpu : usr=1.53%, sys=3.11%, ctx=1713, majf=0, minf=1 00:22:17.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:17.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.321 issued rwts: total=0,6893,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.321 job5: (groupid=0, jobs=1): err= 0: pid=2248527: Fri Jul 26 22:08:27 2024 00:22:17.321 write: IOPS=685, BW=171MiB/s (180MB/s)(1728MiB/10085msec); 0 zone resets 00:22:17.321 slat (usec): min=25, max=43382, avg=1441.45, stdev=3550.75 00:22:17.321 clat (msec): min=13, max=210, avg=91.90, stdev=23.93 00:22:17.321 lat (msec): min=13, max=210, avg=93.34, stdev=24.42 00:22:17.321 clat percentiles (msec): 00:22:17.321 | 1.00th=[ 67], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 70], 00:22:17.321 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 87], 60.00th=[ 91], 00:22:17.321 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 129], 00:22:17.321 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 197], 99.95th=[ 197], 00:22:17.321 | 99.99th=[ 211] 00:22:17.321 bw ( KiB/s): min=120320, max=230400, per=5.16%, avg=175356.75, stdev=42474.25, samples=20 00:22:17.322 iops : min= 470, max= 900, avg=684.95, stdev=165.87, samples=20 00:22:17.322 lat (msec) : 20=0.12%, 50=0.35%, 100=61.94%, 250=37.60% 00:22:17.322 cpu : usr=1.56%, sys=3.00%, ctx=1705, majf=0, minf=1 00:22:17.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:17.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.322 issued rwts: total=0,6912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.322 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.322 job6: (groupid=0, jobs=1): err= 0: pid=2248528: Fri Jul 26 22:08:27 2024 00:22:17.322 write: IOPS=2246, BW=562MiB/s (589MB/s)(5624MiB/10011msec); 0 zone resets 00:22:17.322 slat (usec): min=17, max=14586, avg=438.68, stdev=1035.90 00:22:17.322 clat (usec): min=9957, max=98774, avg=28036.67, stdev=19131.44 00:22:17.322 lat (msec): min=11, max=100, avg=28.48, stdev=19.42 00:22:17.322 clat percentiles (usec): 00:22:17.322 | 1.00th=[14877], 5.00th=[15533], 10.00th=[15795], 20.00th=[16188], 00:22:17.322 | 30.00th=[16581], 40.00th=[17433], 50.00th=[18482], 60.00th=[19006], 00:22:17.322 | 70.00th=[33817], 80.00th=[37487], 90.00th=[64226], 95.00th=[73925], 00:22:17.322 | 99.00th=[91751], 99.50th=[93848], 99.90th=[94897], 99.95th=[95945], 00:22:17.322 | 99.99th=[98042] 00:22:17.322 bw ( KiB/s): min=189440, max=998400, per=16.29%, avg=553763.89, stdev=319361.34, samples=19 00:22:17.322 iops : min= 740, max= 3900, avg=2163.11, stdev=1247.54, samples=19 00:22:17.322 lat (msec) : 10=0.01%, 20=67.22%, 50=18.20%, 100=14.57% 00:22:17.322 cpu : usr=3.79%, sys=5.47%, ctx=4856, majf=0, minf=1 00:22:17.322 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:17.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.322 issued rwts: total=0,22494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.322 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.322 job7: (groupid=0, jobs=1): err= 0: pid=2248529: Fri Jul 26 22:08:27 2024 00:22:17.322 write: IOPS=772, BW=193MiB/s (202MB/s)(1947MiB/10082msec); 0 zone resets 00:22:17.322 slat (usec): min=21, max=58012, avg=1234.33, stdev=3605.68 00:22:17.322 clat (msec): min=11, max=214, avg=81.61, stdev=30.68 00:22:17.322 lat (msec): min=11, max=214, avg=82.84, stdev=31.28 00:22:17.322 clat percentiles (msec): 00:22:17.322 | 1.00th=[ 26], 5.00th=[ 39], 10.00th=[ 53], 20.00th=[ 55], 00:22:17.322 | 30.00th=[ 57], 40.00th=[ 69], 50.00th=[ 74], 60.00th=[ 80], 00:22:17.322 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 124], 95.00th=[ 128], 00:22:17.322 | 99.00th=[ 146], 99.50th=[ 171], 99.90th=[ 197], 99.95th=[ 215], 00:22:17.322 | 99.99th=[ 215] 00:22:17.322 bw ( KiB/s): min=119296, max=320000, per=5.82%, avg=197708.80, stdev=70423.69, samples=20 00:22:17.322 iops : min= 466, max= 1250, avg=772.30, stdev=275.09, samples=20 00:22:17.322 lat (msec) : 20=0.50%, 50=5.78%, 100=60.62%, 250=33.10% 00:22:17.322 cpu : usr=1.88%, sys=3.42%, ctx=2083, majf=0, minf=1 00:22:17.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:17.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.322 issued rwts: total=0,7786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.322 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.322 job8: (groupid=0, jobs=1): err= 0: pid=2248530: Fri Jul 26 22:08:27 2024 00:22:17.322 write: IOPS=1461, BW=365MiB/s (383MB/s)(3662MiB/10025msec); 0 zone resets 00:22:17.322 slat (usec): min=21, max=34372, avg=662.78, stdev=1281.07 00:22:17.322 clat (usec): min=833, max=121341, avg=43124.53, stdev=12152.79 00:22:17.322 lat (usec): min=889, max=121393, avg=43787.31, stdev=12304.90 00:22:17.322 clat percentiles (msec): 00:22:17.322 | 1.00th=[ 28], 5.00th=[ 34], 10.00th=[ 35], 20.00th=[ 36], 00:22:17.322 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:22:17.322 | 70.00th=[ 50], 80.00th=[ 53], 90.00th=[ 56], 95.00th=[ 58], 00:22:17.322 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 106], 99.95th=[ 111], 00:22:17.322 | 99.99th=[ 122] 00:22:17.322 bw ( KiB/s): min=238592, max=470016, per=10.99%, avg=373393.30, stdev=72605.74, samples=20 00:22:17.322 iops : min= 932, max= 1836, avg=1458.55, stdev=283.60, samples=20 00:22:17.322 lat (usec) : 1000=0.01% 00:22:17.322 lat (msec) : 2=0.07%, 4=0.16%, 10=0.05%, 20=0.27%, 50=71.73% 00:22:17.322 lat (msec) : 100=27.57%, 250=0.14% 00:22:17.322 cpu : usr=3.09%, sys=5.00%, ctx=3644, majf=0, minf=1 00:22:17.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:17.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.322 issued rwts: total=0,14647,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.322 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.322 job9: (groupid=0, jobs=1): err= 0: pid=2248531: Fri Jul 26 22:08:27 2024 00:22:17.322 write: IOPS=685, 
BW=171MiB/s (180MB/s)(1730MiB/10087msec); 0 zone resets 00:22:17.322 slat (usec): min=20, max=42730, avg=1443.47, stdev=3578.32 00:22:17.322 clat (msec): min=4, max=212, avg=91.84, stdev=24.07 00:22:17.322 lat (msec): min=4, max=212, avg=93.29, stdev=24.57 00:22:17.322 clat percentiles (msec): 00:22:17.322 | 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 70], 00:22:17.322 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 87], 60.00th=[ 91], 00:22:17.322 | 70.00th=[ 111], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 129], 00:22:17.322 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 192], 99.95th=[ 192], 00:22:17.322 | 99.99th=[ 213] 00:22:17.322 bw ( KiB/s): min=121856, max=230400, per=5.16%, avg=175462.40, stdev=42487.10, samples=20 00:22:17.322 iops : min= 476, max= 900, avg=685.40, stdev=165.97, samples=20 00:22:17.322 lat (msec) : 10=0.14%, 20=0.13%, 50=0.39%, 100=61.51%, 250=37.83% 00:22:17.322 cpu : usr=1.66%, sys=2.97%, ctx=1731, majf=0, minf=1 00:22:17.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:17.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.322 issued rwts: total=0,6918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.322 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.322 job10: (groupid=0, jobs=1): err= 0: pid=2248532: Fri Jul 26 22:08:27 2024 00:22:17.322 write: IOPS=1475, BW=369MiB/s (387MB/s)(3720MiB/10086msec); 0 zone resets 00:22:17.322 slat (usec): min=17, max=74539, avg=653.95, stdev=2671.55 00:22:17.322 clat (msec): min=4, max=221, avg=42.70, stdev=36.11 00:22:17.322 lat (msec): min=4, max=222, avg=43.36, stdev=36.71 00:22:17.322 clat percentiles (msec): 00:22:17.322 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:22:17.322 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 33], 60.00th=[ 36], 00:22:17.322 | 70.00th=[ 39], 80.00th=[ 55], 90.00th=[ 117], 95.00th=[ 124], 00:22:17.322 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 182], 99.95th=[ 205], 00:22:17.322 | 99.99th=[ 222] 00:22:17.322 bw ( KiB/s): min=122880, max=879104, per=11.16%, avg=379428.35, stdev=277647.52, samples=20 00:22:17.322 iops : min= 480, max= 3434, avg=1482.10, stdev=1084.48, samples=20 00:22:17.322 lat (msec) : 10=0.26%, 20=46.25%, 50=29.14%, 100=9.20%, 250=15.15% 00:22:17.322 cpu : usr=2.49%, sys=4.05%, ctx=3284, majf=0, minf=1 00:22:17.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:17.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.322 issued rwts: total=0,14881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.322 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.322 00:22:17.322 Run status group 0 (all jobs): 00:22:17.322 WRITE: bw=3319MiB/s (3480MB/s), 171MiB/s-562MiB/s (179MB/s-589MB/s), io=32.7GiB (35.1GB), run=10011-10093msec 00:22:17.322 00:22:17.322 Disk stats (read/write): 00:22:17.322 nvme0n1: ios=49/34023, merge=0/0, ticks=8/1218499, in_queue=1218507, util=96.76% 00:22:17.322 nvme10n1: ios=0/13579, merge=0/0, ticks=0/1211402, in_queue=1211402, util=96.85% 00:22:17.322 nvme1n1: ios=0/13825, merge=0/0, ticks=0/1207201, in_queue=1207201, util=97.20% 00:22:17.322 nvme2n1: ios=0/44192, merge=0/0, ticks=0/1225250, in_queue=1225250, util=97.38% 00:22:17.322 nvme3n1: ios=0/13559, merge=0/0, ticks=0/1208550, in_queue=1208550, util=97.48% 00:22:17.322 
nvme4n1: ios=0/13584, merge=0/0, ticks=0/1210426, in_queue=1210426, util=97.86% 00:22:17.322 nvme5n1: ios=0/43929, merge=0/0, ticks=0/1226091, in_queue=1226091, util=98.04% 00:22:17.322 nvme6n1: ios=0/15329, merge=0/0, ticks=0/1209883, in_queue=1209883, util=98.18% 00:22:17.322 nvme7n1: ios=0/28701, merge=0/0, ticks=0/1220395, in_queue=1220395, util=98.63% 00:22:17.322 nvme8n1: ios=0/13590, merge=0/0, ticks=0/1209036, in_queue=1209036, util=98.85% 00:22:17.322 nvme9n1: ios=0/29515, merge=0/0, ticks=0/1217886, in_queue=1217886, util=99.00% 00:22:17.322 22:08:27 -- target/multiconnection.sh@36 -- # sync 00:22:17.322 22:08:27 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:17.322 22:08:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.322 22:08:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:17.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:17.888 22:08:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:17.888 22:08:28 -- common/autotest_common.sh@1198 -- # local i=0 00:22:17.888 22:08:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:17.888 22:08:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:17.888 22:08:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:17.888 22:08:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:17.888 22:08:28 -- common/autotest_common.sh@1210 -- # return 0 00:22:17.888 22:08:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.888 22:08:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.888 22:08:28 -- common/autotest_common.sh@10 -- # set +x 00:22:17.888 22:08:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.888 22:08:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.888 22:08:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:18.820 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:18.820 22:08:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:18.820 22:08:29 -- common/autotest_common.sh@1198 -- # local i=0 00:22:18.820 22:08:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:18.820 22:08:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:18.820 22:08:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:18.820 22:08:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:18.820 22:08:29 -- common/autotest_common.sh@1210 -- # return 0 00:22:18.820 22:08:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:18.820 22:08:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:18.820 22:08:29 -- common/autotest_common.sh@10 -- # set +x 00:22:18.820 22:08:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:18.820 22:08:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.820 22:08:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:19.753 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:19.753 22:08:30 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:19.753 22:08:30 -- common/autotest_common.sh@1198 -- # local i=0 00:22:19.753 22:08:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:19.753 22:08:30 -- common/autotest_common.sh@1199 -- # 
grep -q -w SPDK3 00:22:19.753 22:08:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:19.753 22:08:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:19.753 22:08:30 -- common/autotest_common.sh@1210 -- # return 0 00:22:19.753 22:08:30 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:19.753 22:08:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:19.753 22:08:30 -- common/autotest_common.sh@10 -- # set +x 00:22:19.753 22:08:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:19.753 22:08:30 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:19.753 22:08:30 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:20.683 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:20.683 22:08:31 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:20.683 22:08:31 -- common/autotest_common.sh@1198 -- # local i=0 00:22:20.940 22:08:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:20.940 22:08:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:20.940 22:08:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:20.940 22:08:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:20.940 22:08:31 -- common/autotest_common.sh@1210 -- # return 0 00:22:20.940 22:08:31 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:20.940 22:08:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.940 22:08:31 -- common/autotest_common.sh@10 -- # set +x 00:22:20.940 22:08:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.940 22:08:31 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:20.940 22:08:31 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:21.872 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:21.872 22:08:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:21.872 22:08:32 -- common/autotest_common.sh@1198 -- # local i=0 00:22:21.872 22:08:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:21.872 22:08:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:21.872 22:08:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:21.872 22:08:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:21.872 22:08:32 -- common/autotest_common.sh@1210 -- # return 0 00:22:21.872 22:08:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:21.872 22:08:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:21.872 22:08:32 -- common/autotest_common.sh@10 -- # set +x 00:22:21.872 22:08:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:21.872 22:08:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:21.873 22:08:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:22.803 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:22.803 22:08:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:22.803 22:08:33 -- common/autotest_common.sh@1198 -- # local i=0 00:22:22.803 22:08:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:22.803 22:08:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:22.803 22:08:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:22:22.803 22:08:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:22.803 22:08:33 -- common/autotest_common.sh@1210 -- # return 0 00:22:22.803 22:08:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:22.803 22:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:22.803 22:08:33 -- common/autotest_common.sh@10 -- # set +x 00:22:22.803 22:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:22.803 22:08:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:22.803 22:08:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:23.734 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:23.734 22:08:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:23.734 22:08:34 -- common/autotest_common.sh@1198 -- # local i=0 00:22:23.734 22:08:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:23.734 22:08:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:23.734 22:08:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:23.734 22:08:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:23.734 22:08:34 -- common/autotest_common.sh@1210 -- # return 0 00:22:23.734 22:08:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:23.734 22:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.734 22:08:34 -- common/autotest_common.sh@10 -- # set +x 00:22:23.734 22:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.734 22:08:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:23.734 22:08:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:24.679 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:24.679 22:08:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:24.952 22:08:35 -- common/autotest_common.sh@1198 -- # local i=0 00:22:24.952 22:08:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:24.952 22:08:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:24.952 22:08:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:24.952 22:08:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:24.952 22:08:35 -- common/autotest_common.sh@1210 -- # return 0 00:22:24.952 22:08:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:24.952 22:08:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.952 22:08:35 -- common/autotest_common.sh@10 -- # set +x 00:22:24.952 22:08:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.952 22:08:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.952 22:08:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:25.883 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:25.883 22:08:36 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:25.883 22:08:36 -- common/autotest_common.sh@1198 -- # local i=0 00:22:25.883 22:08:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:25.883 22:08:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:25.883 22:08:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:25.883 22:08:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 
00:22:25.883 22:08:36 -- common/autotest_common.sh@1210 -- # return 0 00:22:25.883 22:08:36 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:25.883 22:08:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:25.883 22:08:36 -- common/autotest_common.sh@10 -- # set +x 00:22:25.883 22:08:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:25.883 22:08:36 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:25.883 22:08:36 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:26.814 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:26.814 22:08:37 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:26.814 22:08:37 -- common/autotest_common.sh@1198 -- # local i=0 00:22:26.814 22:08:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:26.814 22:08:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:26.814 22:08:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:26.814 22:08:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:26.814 22:08:37 -- common/autotest_common.sh@1210 -- # return 0 00:22:26.814 22:08:37 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:26.814 22:08:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:26.814 22:08:37 -- common/autotest_common.sh@10 -- # set +x 00:22:26.814 22:08:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:26.814 22:08:37 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.814 22:08:37 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:27.746 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:27.746 22:08:38 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:27.746 22:08:38 -- common/autotest_common.sh@1198 -- # local i=0 00:22:27.746 22:08:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:27.746 22:08:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:27.746 22:08:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:27.746 22:08:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:27.746 22:08:38 -- common/autotest_common.sh@1210 -- # return 0 00:22:27.746 22:08:38 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:27.746 22:08:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:27.746 22:08:38 -- common/autotest_common.sh@10 -- # set +x 00:22:27.746 22:08:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:27.746 22:08:38 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:27.746 22:08:38 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:27.746 22:08:38 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:27.746 22:08:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:27.746 22:08:38 -- nvmf/common.sh@116 -- # sync 00:22:27.746 22:08:38 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:27.746 22:08:38 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:27.746 22:08:38 -- nvmf/common.sh@119 -- # set +e 00:22:27.746 22:08:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:27.746 22:08:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:27.746 rmmod nvme_rdma 00:22:27.746 rmmod nvme_fabrics 00:22:27.747 22:08:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
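The teardown traced above is the same three-step pattern repeated for cnode1 through cnode11: disconnect the initiator-side controller, wait for the block device with the matching serial to disappear, then delete the subsystem on the target. A minimal stand-alone sketch of that loop, assuming eleven subsystems named nqn.2016-06.io.spdk:cnodeN with serials SPDKN and rpc.py on its default socket (the log drives the same calls through the rpc_cmd wrapper), would be:

    #!/usr/bin/env bash
    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # drop the initiator-side controller for this subsystem
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
        # wait until no block device with serial SPDK$i is visible any more
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do
            sleep 1
        done
        # remove the subsystem from the running SPDK target
        ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    done
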
00:22:27.747 22:08:38 -- nvmf/common.sh@123 -- # set -e 00:22:27.747 22:08:38 -- nvmf/common.sh@124 -- # return 0 00:22:27.747 22:08:38 -- nvmf/common.sh@477 -- # '[' -n 2240418 ']' 00:22:27.747 22:08:38 -- nvmf/common.sh@478 -- # killprocess 2240418 00:22:27.747 22:08:38 -- common/autotest_common.sh@926 -- # '[' -z 2240418 ']' 00:22:27.747 22:08:38 -- common/autotest_common.sh@930 -- # kill -0 2240418 00:22:27.747 22:08:38 -- common/autotest_common.sh@931 -- # uname 00:22:27.747 22:08:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:28.004 22:08:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2240418 00:22:28.004 22:08:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:28.004 22:08:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:28.004 22:08:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2240418' 00:22:28.004 killing process with pid 2240418 00:22:28.004 22:08:39 -- common/autotest_common.sh@945 -- # kill 2240418 00:22:28.004 22:08:39 -- common/autotest_common.sh@950 -- # wait 2240418 00:22:28.263 22:08:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:28.263 22:08:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:28.263 00:22:28.263 real 1m16.451s 00:22:28.263 user 4m54.295s 00:22:28.263 sys 0m20.851s 00:22:28.263 22:08:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:28.263 22:08:39 -- common/autotest_common.sh@10 -- # set +x 00:22:28.263 ************************************ 00:22:28.263 END TEST nvmf_multiconnection 00:22:28.263 ************************************ 00:22:28.521 22:08:39 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:28.521 22:08:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:28.521 22:08:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:28.521 22:08:39 -- common/autotest_common.sh@10 -- # set +x 00:22:28.521 ************************************ 00:22:28.521 START TEST nvmf_initiator_timeout 00:22:28.521 ************************************ 00:22:28.521 22:08:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:28.521 * Looking for test storage... 
00:22:28.521 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:28.521 22:08:39 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:28.521 22:08:39 -- nvmf/common.sh@7 -- # uname -s 00:22:28.521 22:08:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:28.521 22:08:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:28.521 22:08:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:28.521 22:08:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:28.521 22:08:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:28.521 22:08:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:28.521 22:08:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:28.521 22:08:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:28.521 22:08:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:28.521 22:08:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:28.521 22:08:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:28.521 22:08:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:28.521 22:08:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:28.521 22:08:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:28.521 22:08:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:28.521 22:08:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:28.521 22:08:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:28.521 22:08:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:28.521 22:08:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:28.521 22:08:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.522 22:08:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.522 22:08:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.522 22:08:39 -- paths/export.sh@5 -- # export PATH 00:22:28.522 22:08:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:28.522 22:08:39 -- nvmf/common.sh@46 -- # : 0 00:22:28.522 22:08:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:28.522 22:08:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:28.522 22:08:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:28.522 22:08:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:28.522 22:08:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:28.522 22:08:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:28.522 22:08:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:28.522 22:08:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:28.522 22:08:39 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:28.522 22:08:39 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:28.522 22:08:39 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:28.522 22:08:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:28.522 22:08:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:28.522 22:08:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:28.522 22:08:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:28.522 22:08:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:28.522 22:08:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.522 22:08:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:28.522 22:08:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:28.522 22:08:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:28.522 22:08:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:28.522 22:08:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:28.522 22:08:39 -- common/autotest_common.sh@10 -- # set +x 00:22:36.634 22:08:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:36.634 22:08:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:36.634 22:08:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:36.634 22:08:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:36.634 22:08:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:36.634 22:08:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:36.634 22:08:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:36.634 22:08:47 -- nvmf/common.sh@294 -- # net_devs=() 00:22:36.634 22:08:47 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:22:36.634 22:08:47 -- nvmf/common.sh@295 -- # e810=() 00:22:36.634 22:08:47 -- nvmf/common.sh@295 -- # local -ga e810 00:22:36.634 22:08:47 -- nvmf/common.sh@296 -- # x722=() 00:22:36.634 22:08:47 -- nvmf/common.sh@296 -- # local -ga x722 00:22:36.634 22:08:47 -- nvmf/common.sh@297 -- # mlx=() 00:22:36.634 22:08:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:36.634 22:08:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.634 22:08:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:36.634 22:08:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:22:36.634 22:08:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:36.634 22:08:47 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:36.634 22:08:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:36.634 22:08:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:36.634 22:08:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:36.634 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:36.634 22:08:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.634 22:08:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:36.634 22:08:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:36.634 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:36.634 22:08:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:36.634 22:08:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:36.634 22:08:47 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:36.634 22:08:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.634 22:08:47 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:36.634 22:08:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.634 22:08:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:36.634 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:36.634 22:08:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.634 22:08:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:36.634 22:08:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.634 22:08:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:36.634 22:08:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.634 22:08:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:36.634 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:36.634 22:08:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.634 22:08:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:36.634 22:08:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:36.634 22:08:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:36.634 22:08:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:36.634 22:08:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:36.634 22:08:47 -- nvmf/common.sh@57 -- # uname 00:22:36.634 22:08:47 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:36.634 22:08:47 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:22:36.634 22:08:47 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:36.634 22:08:47 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:36.634 22:08:47 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:36.634 22:08:47 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:36.634 22:08:47 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:36.635 22:08:47 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:36.635 22:08:47 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:36.635 22:08:47 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:36.635 22:08:47 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:36.635 22:08:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.635 22:08:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:36.635 22:08:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:36.635 22:08:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.635 22:08:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:36.635 22:08:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:36.635 22:08:47 -- nvmf/common.sh@104 -- # continue 2 00:22:36.635 22:08:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:36.635 22:08:47 -- nvmf/common.sh@104 -- # continue 2 00:22:36.635 22:08:47 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:36.635 22:08:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:36.635 22:08:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:36.635 22:08:47 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:36.635 22:08:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:36.635 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.635 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:36.635 altname enp217s0f0np0 00:22:36.635 altname ens818f0np0 00:22:36.635 inet 192.168.100.8/24 scope global mlx_0_0 00:22:36.635 valid_lft forever preferred_lft forever 00:22:36.635 22:08:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:36.635 22:08:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:36.635 22:08:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:36.635 22:08:47 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:36.635 22:08:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:36.635 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:36.635 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:36.635 altname enp217s0f1np1 00:22:36.635 altname ens818f1np1 00:22:36.635 inet 192.168.100.9/24 scope global mlx_0_1 00:22:36.635 valid_lft forever preferred_lft forever 00:22:36.635 22:08:47 -- nvmf/common.sh@410 -- # return 0 00:22:36.635 22:08:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:36.635 22:08:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:36.635 22:08:47 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:36.635 22:08:47 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:36.635 22:08:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:36.635 22:08:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:36.635 22:08:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:36.635 22:08:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:36.635 22:08:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:36.635 22:08:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:36.635 22:08:47 -- nvmf/common.sh@104 -- # continue 2 00:22:36.635 22:08:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:36.635 22:08:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:36.635 22:08:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 
00:22:36.635 22:08:47 -- nvmf/common.sh@104 -- # continue 2 00:22:36.635 22:08:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:36.635 22:08:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:36.635 22:08:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:36.635 22:08:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:36.635 22:08:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:36.635 22:08:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:36.635 22:08:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:36.635 22:08:47 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:36.635 192.168.100.9' 00:22:36.635 22:08:47 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:36.635 192.168.100.9' 00:22:36.635 22:08:47 -- nvmf/common.sh@445 -- # head -n 1 00:22:36.635 22:08:47 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:36.635 22:08:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:36.635 192.168.100.9' 00:22:36.635 22:08:47 -- nvmf/common.sh@446 -- # tail -n +2 00:22:36.635 22:08:47 -- nvmf/common.sh@446 -- # head -n 1 00:22:36.635 22:08:47 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:36.635 22:08:47 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:36.635 22:08:47 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:36.635 22:08:47 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:36.635 22:08:47 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:36.635 22:08:47 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:36.894 22:08:47 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:36.894 22:08:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:36.894 22:08:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:36.894 22:08:47 -- common/autotest_common.sh@10 -- # set +x 00:22:36.894 22:08:47 -- nvmf/common.sh@469 -- # nvmfpid=2256053 00:22:36.894 22:08:47 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:36.894 22:08:47 -- nvmf/common.sh@470 -- # waitforlisten 2256053 00:22:36.894 22:08:47 -- common/autotest_common.sh@819 -- # '[' -z 2256053 ']' 00:22:36.894 22:08:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.894 22:08:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:36.894 22:08:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.894 22:08:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:36.894 22:08:47 -- common/autotest_common.sh@10 -- # set +x 00:22:36.894 [2024-07-26 22:08:47.920243] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:22:36.894 [2024-07-26 22:08:47.920303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.894 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.894 [2024-07-26 22:08:48.006559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:36.894 [2024-07-26 22:08:48.046136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:36.894 [2024-07-26 22:08:48.046241] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.894 [2024-07-26 22:08:48.046251] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.894 [2024-07-26 22:08:48.046260] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:36.894 [2024-07-26 22:08:48.046304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.894 [2024-07-26 22:08:48.046413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.894 [2024-07-26 22:08:48.046497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.894 [2024-07-26 22:08:48.046498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.830 22:08:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:37.830 22:08:48 -- common/autotest_common.sh@852 -- # return 0 00:22:37.830 22:08:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:37.830 22:08:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:37.830 22:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.830 22:08:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.830 22:08:48 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:37.830 22:08:48 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:37.830 22:08:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.830 22:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.830 Malloc0 00:22:37.830 22:08:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.830 22:08:48 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:37.830 22:08:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.830 22:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.830 Delay0 00:22:37.830 22:08:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.830 22:08:48 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:37.830 22:08:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.830 22:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.830 [2024-07-26 22:08:48.824402] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bf8580/0x1d31080) succeed. 00:22:37.830 [2024-07-26 22:08:48.835068] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ca96f0/0x1c10f80) succeed. 
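Up to this point the initiator_timeout target side consists of four steps: start nvmf_tgt, create a malloc bdev, wrap it in a delay bdev with 30 us latencies, and create the RDMA transport. A sketch of the equivalent explicit calls (binary and script paths, and the rpc.py socket, are assumptions; the log issues them via rpc_cmd):

    # start the target on all four cores with the full tracepoint mask, as in this run
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # 64 MiB malloc bdev with 512-byte blocks, wrapped in a delay bdev that adds
    # 30 us average and 30 us p99 latency to both reads and writes
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

    # RDMA transport for the listener that is added next
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
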
00:22:37.830 22:08:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.830 22:08:48 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:37.830 22:08:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.830 22:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.830 22:08:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.830 22:08:48 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:37.830 22:08:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.830 22:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.830 22:08:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.830 22:08:48 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:37.830 22:08:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:37.830 22:08:48 -- common/autotest_common.sh@10 -- # set +x 00:22:37.830 [2024-07-26 22:08:48.978682] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:37.830 22:08:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:37.830 22:08:48 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:38.765 22:08:49 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:38.765 22:08:49 -- common/autotest_common.sh@1177 -- # local i=0 00:22:38.765 22:08:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:38.765 22:08:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:38.765 22:08:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:41.296 22:08:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:41.297 22:08:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:41.297 22:08:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:41.297 22:08:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:41.297 22:08:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:41.297 22:08:51 -- common/autotest_common.sh@1187 -- # return 0 00:22:41.297 22:08:51 -- target/initiator_timeout.sh@35 -- # fio_pid=2256718 00:22:41.297 22:08:51 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:41.297 22:08:51 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:41.297 [global] 00:22:41.297 thread=1 00:22:41.297 invalidate=1 00:22:41.297 rw=write 00:22:41.297 time_based=1 00:22:41.297 runtime=60 00:22:41.297 ioengine=libaio 00:22:41.297 direct=1 00:22:41.297 bs=4096 00:22:41.297 iodepth=1 00:22:41.297 norandommap=0 00:22:41.297 numjobs=1 00:22:41.297 00:22:41.297 verify_dump=1 00:22:41.297 verify_backlog=512 00:22:41.297 verify_state_save=0 00:22:41.297 do_verify=1 00:22:41.297 verify=crc32c-intel 00:22:41.297 [job0] 00:22:41.297 filename=/dev/nvme0n1 00:22:41.297 Could not set queue depth (nvme0n1) 00:22:41.297 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:41.297 fio-3.35 00:22:41.297 Starting 1 thread 00:22:43.838 22:08:54 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:43.838 22:08:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.838 22:08:54 -- common/autotest_common.sh@10 -- # set +x 00:22:43.838 true 00:22:43.838 22:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.838 22:08:55 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:43.838 22:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.838 22:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:43.838 true 00:22:43.838 22:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.838 22:08:55 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:43.838 22:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.838 22:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:43.838 true 00:22:43.838 22:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.838 22:08:55 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:43.838 22:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.838 22:08:55 -- common/autotest_common.sh@10 -- # set +x 00:22:43.838 true 00:22:43.838 22:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.838 22:08:55 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:47.116 22:08:58 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:47.116 22:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.116 22:08:58 -- common/autotest_common.sh@10 -- # set +x 00:22:47.116 true 00:22:47.116 22:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.116 22:08:58 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:47.116 22:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.116 22:08:58 -- common/autotest_common.sh@10 -- # set +x 00:22:47.116 true 00:22:47.116 22:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.116 22:08:58 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:47.116 22:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.116 22:08:58 -- common/autotest_common.sh@10 -- # set +x 00:22:47.116 true 00:22:47.116 22:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.116 22:08:58 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:47.116 22:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.116 22:08:58 -- common/autotest_common.sh@10 -- # set +x 00:22:47.116 true 00:22:47.116 22:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.116 22:08:58 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:47.116 22:08:58 -- target/initiator_timeout.sh@54 -- # wait 2256718 00:23:43.352 00:23:43.352 job0: (groupid=0, jobs=1): err= 0: pid=2256985: Fri Jul 26 22:09:52 2024 00:23:43.352 read: IOPS=1288, BW=5154KiB/s (5278kB/s)(302MiB/60000msec) 00:23:43.353 slat (usec): min=2, max=17234, avg= 8.84, stdev=84.52 00:23:43.353 clat (usec): min=63, max=42405k, avg=651.84, stdev=152507.31 00:23:43.353 lat (usec): min=84, max=42405k, avg=660.68, stdev=152507.34 00:23:43.353 clat percentiles (usec): 00:23:43.353 | 1.00th=[ 90], 5.00th=[ 93], 10.00th=[ 95], 20.00th=[ 98], 00:23:43.353 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 
60.00th=[ 105], 00:23:43.353 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 113], 95.00th=[ 116], 00:23:43.353 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 130], 99.95th=[ 155], 00:23:43.353 | 99.99th=[ 302] 00:23:43.353 write: IOPS=1295, BW=5183KiB/s (5307kB/s)(304MiB/60000msec); 0 zone resets 00:23:43.353 slat (usec): min=3, max=283, avg=10.39, stdev= 3.03 00:23:43.353 clat (usec): min=68, max=708, avg=100.34, stdev= 7.84 00:23:43.353 lat (usec): min=82, max=720, avg=110.73, stdev= 8.93 00:23:43.353 clat percentiles (usec): 00:23:43.353 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:23:43.353 | 30.00th=[ 97], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 102], 00:23:43.353 | 70.00th=[ 104], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 113], 00:23:43.353 | 99.00th=[ 119], 99.50th=[ 121], 99.90th=[ 131], 99.95th=[ 145], 00:23:43.353 | 99.99th=[ 285] 00:23:43.353 bw ( KiB/s): min= 4032, max=20056, per=100.00%, avg=17319.09, stdev=2532.82, samples=35 00:23:43.353 iops : min= 1008, max= 5014, avg=4329.77, stdev=633.21, samples=35 00:23:43.353 lat (usec) : 100=41.99%, 250=57.99%, 500=0.02%, 750=0.01% 00:23:43.353 lat (msec) : 2=0.01%, >=2000=0.01% 00:23:43.353 cpu : usr=1.57%, sys=3.33%, ctx=155058, majf=0, minf=105 00:23:43.353 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:43.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:43.353 issued rwts: total=77312,77739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:43.353 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:43.353 00:23:43.353 Run status group 0 (all jobs): 00:23:43.353 READ: bw=5154KiB/s (5278kB/s), 5154KiB/s-5154KiB/s (5278kB/s-5278kB/s), io=302MiB (317MB), run=60000-60000msec 00:23:43.353 WRITE: bw=5183KiB/s (5307kB/s), 5183KiB/s-5183KiB/s (5307kB/s-5307kB/s), io=304MiB (318MB), run=60000-60000msec 00:23:43.353 00:23:43.353 Disk stats (read/write): 00:23:43.353 nvme0n1: ios=77134/77312, merge=0/0, ticks=7349/7342, in_queue=14691, util=99.74% 00:23:43.353 22:09:52 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:43.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:43.353 22:09:53 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:43.353 22:09:53 -- common/autotest_common.sh@1198 -- # local i=0 00:23:43.353 22:09:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:43.353 22:09:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:43.353 22:09:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:43.353 22:09:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:43.353 22:09:53 -- common/autotest_common.sh@1210 -- # return 0 00:23:43.353 22:09:53 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:43.353 22:09:53 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:43.353 nvmf hotplug test: fio successful as expected 00:23:43.353 22:09:53 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:43.353 22:09:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:43.353 22:09:53 -- common/autotest_common.sh@10 -- # set +x 00:23:43.353 22:09:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:43.353 22:09:53 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
00:23:43.353 22:09:53 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:43.353 22:09:53 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:43.353 22:09:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:43.353 22:09:53 -- nvmf/common.sh@116 -- # sync 00:23:43.353 22:09:53 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:43.353 22:09:53 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:43.353 22:09:53 -- nvmf/common.sh@119 -- # set +e 00:23:43.353 22:09:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:43.353 22:09:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:43.353 rmmod nvme_rdma 00:23:43.353 rmmod nvme_fabrics 00:23:43.353 22:09:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:43.353 22:09:53 -- nvmf/common.sh@123 -- # set -e 00:23:43.353 22:09:53 -- nvmf/common.sh@124 -- # return 0 00:23:43.353 22:09:53 -- nvmf/common.sh@477 -- # '[' -n 2256053 ']' 00:23:43.353 22:09:53 -- nvmf/common.sh@478 -- # killprocess 2256053 00:23:43.353 22:09:53 -- common/autotest_common.sh@926 -- # '[' -z 2256053 ']' 00:23:43.353 22:09:53 -- common/autotest_common.sh@930 -- # kill -0 2256053 00:23:43.353 22:09:53 -- common/autotest_common.sh@931 -- # uname 00:23:43.353 22:09:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:43.353 22:09:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2256053 00:23:43.353 22:09:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:43.353 22:09:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:43.353 22:09:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2256053' 00:23:43.353 killing process with pid 2256053 00:23:43.353 22:09:53 -- common/autotest_common.sh@945 -- # kill 2256053 00:23:43.353 22:09:53 -- common/autotest_common.sh@950 -- # wait 2256053 00:23:43.353 22:09:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:43.353 22:09:53 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:43.353 00:23:43.353 real 1m14.358s 00:23:43.353 user 4m33.447s 00:23:43.353 sys 0m8.917s 00:23:43.353 22:09:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.353 22:09:53 -- common/autotest_common.sh@10 -- # set +x 00:23:43.353 ************************************ 00:23:43.353 END TEST nvmf_initiator_timeout 00:23:43.353 ************************************ 00:23:43.353 22:09:53 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:43.353 22:09:53 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:43.353 22:09:53 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:43.353 22:09:53 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:43.353 22:09:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:43.353 22:09:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:43.353 22:09:53 -- common/autotest_common.sh@10 -- # set +x 00:23:43.353 ************************************ 00:23:43.353 START TEST nvmf_shutdown 00:23:43.353 ************************************ 00:23:43.353 22:09:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:43.353 * Looking for test storage... 
00:23:43.353 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:43.353 22:09:54 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.353 22:09:54 -- nvmf/common.sh@7 -- # uname -s 00:23:43.353 22:09:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.353 22:09:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.353 22:09:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.353 22:09:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.353 22:09:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.353 22:09:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.353 22:09:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.353 22:09:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.353 22:09:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.353 22:09:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.353 22:09:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:43.353 22:09:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:43.353 22:09:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.353 22:09:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.353 22:09:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.353 22:09:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:43.353 22:09:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.353 22:09:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.353 22:09:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.353 22:09:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.353 22:09:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.353 22:09:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.353 22:09:54 -- paths/export.sh@5 -- # export PATH 00:23:43.353 22:09:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.353 22:09:54 -- nvmf/common.sh@46 -- # : 0 00:23:43.353 22:09:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:43.353 22:09:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:43.353 22:09:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:43.354 22:09:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.354 22:09:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.354 22:09:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:43.354 22:09:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:43.354 22:09:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:43.354 22:09:54 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.354 22:09:54 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.354 22:09:54 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:43.354 22:09:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:43.354 22:09:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:43.354 22:09:54 -- common/autotest_common.sh@10 -- # set +x 00:23:43.354 ************************************ 00:23:43.354 START TEST nvmf_shutdown_tc1 00:23:43.354 ************************************ 00:23:43.354 22:09:54 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:23:43.354 22:09:54 -- target/shutdown.sh@74 -- # starttarget 00:23:43.354 22:09:54 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:43.354 22:09:54 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:43.354 22:09:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.354 22:09:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:43.354 22:09:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:43.354 22:09:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:43.354 22:09:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.354 22:09:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.354 22:09:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.354 22:09:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:43.354 22:09:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:43.354 22:09:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:43.354 22:09:54 -- common/autotest_common.sh@10 -- # set +x 00:23:51.463 22:10:02 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:51.463 22:10:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:51.463 22:10:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:51.463 22:10:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:51.463 22:10:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:51.463 22:10:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:51.463 22:10:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:51.463 22:10:02 -- nvmf/common.sh@294 -- # net_devs=() 00:23:51.463 22:10:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:51.463 22:10:02 -- nvmf/common.sh@295 -- # e810=() 00:23:51.463 22:10:02 -- nvmf/common.sh@295 -- # local -ga e810 00:23:51.463 22:10:02 -- nvmf/common.sh@296 -- # x722=() 00:23:51.463 22:10:02 -- nvmf/common.sh@296 -- # local -ga x722 00:23:51.463 22:10:02 -- nvmf/common.sh@297 -- # mlx=() 00:23:51.463 22:10:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:51.463 22:10:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.463 22:10:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:51.463 22:10:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:51.463 22:10:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:51.463 22:10:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:51.463 22:10:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:51.463 22:10:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:51.463 22:10:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:51.463 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:51.463 22:10:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:51.463 22:10:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:51.463 22:10:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:51.463 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:51.463 22:10:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:51.463 22:10:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:51.463 22:10:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:51.463 22:10:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.463 22:10:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:51.463 22:10:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.463 22:10:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:51.463 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:51.463 22:10:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.463 22:10:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:51.463 22:10:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.463 22:10:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:51.463 22:10:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.463 22:10:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:51.463 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:51.463 22:10:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.463 22:10:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:51.463 22:10:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:51.463 22:10:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:51.463 22:10:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:51.463 22:10:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:51.463 22:10:02 -- nvmf/common.sh@57 -- # uname 00:23:51.463 22:10:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:51.463 22:10:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:51.463 22:10:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:51.463 22:10:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:51.463 22:10:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:51.464 22:10:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:51.464 22:10:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:51.464 22:10:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:51.464 22:10:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:51.464 22:10:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:51.464 22:10:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:51.464 22:10:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:51.464 22:10:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:51.464 22:10:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:51.464 22:10:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:51.464 22:10:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:51.464 22:10:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:51.464 22:10:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:51.464 22:10:02 -- nvmf/common.sh@104 -- # continue 2 
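The rdma_device_init sequence traced above boils down to loading the RDMA/InfiniBand kernel stack before any interface is configured; roughly, with the module list taken straight from the trace:

# Load the IB core, connection managers and the userspace verbs/RDMA-CM interfaces.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done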
00:23:51.464 22:10:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:51.464 22:10:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:51.464 22:10:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:51.464 22:10:02 -- nvmf/common.sh@104 -- # continue 2 00:23:51.464 22:10:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:51.464 22:10:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:51.464 22:10:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:51.464 22:10:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:51.464 22:10:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:51.464 22:10:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:51.464 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:51.464 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:51.464 altname enp217s0f0np0 00:23:51.464 altname ens818f0np0 00:23:51.464 inet 192.168.100.8/24 scope global mlx_0_0 00:23:51.464 valid_lft forever preferred_lft forever 00:23:51.464 22:10:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:51.464 22:10:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:51.464 22:10:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:51.464 22:10:02 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:51.464 22:10:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:51.464 22:10:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:51.464 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:51.464 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:51.464 altname enp217s0f1np1 00:23:51.464 altname ens818f1np1 00:23:51.464 inet 192.168.100.9/24 scope global mlx_0_1 00:23:51.464 valid_lft forever preferred_lft forever 00:23:51.464 22:10:02 -- nvmf/common.sh@410 -- # return 0 00:23:51.464 22:10:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:51.464 22:10:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:51.464 22:10:02 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:51.464 22:10:02 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:51.464 22:10:02 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:51.464 22:10:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:51.464 22:10:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:51.464 22:10:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:51.464 22:10:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:51.464 22:10:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:51.464 22:10:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:51.464 22:10:02 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:51.464 22:10:02 -- nvmf/common.sh@104 -- # continue 2 00:23:51.464 22:10:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:51.464 22:10:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:51.464 22:10:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:51.464 22:10:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:51.464 22:10:02 -- nvmf/common.sh@104 -- # continue 2 00:23:51.464 22:10:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:51.464 22:10:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:51.464 22:10:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:51.464 22:10:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:51.464 22:10:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:51.464 22:10:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:51.464 22:10:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:51.464 22:10:02 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:51.464 192.168.100.9' 00:23:51.464 22:10:02 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:51.464 192.168.100.9' 00:23:51.464 22:10:02 -- nvmf/common.sh@445 -- # head -n 1 00:23:51.464 22:10:02 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:51.464 22:10:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:51.464 192.168.100.9' 00:23:51.464 22:10:02 -- nvmf/common.sh@446 -- # tail -n +2 00:23:51.464 22:10:02 -- nvmf/common.sh@446 -- # head -n 1 00:23:51.464 22:10:02 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:51.464 22:10:02 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:51.464 22:10:02 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:51.464 22:10:02 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:51.464 22:10:02 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:51.464 22:10:02 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:51.464 22:10:02 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:51.464 22:10:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:51.464 22:10:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:51.464 22:10:02 -- common/autotest_common.sh@10 -- # set +x 00:23:51.464 22:10:02 -- nvmf/common.sh@469 -- # nvmfpid=2271827 00:23:51.464 22:10:02 -- nvmf/common.sh@470 -- # waitforlisten 2271827 00:23:51.464 22:10:02 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:51.464 22:10:02 -- common/autotest_common.sh@819 -- # '[' -z 2271827 ']' 00:23:51.464 22:10:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.464 22:10:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:51.464 22:10:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:51.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.464 22:10:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:51.464 22:10:02 -- common/autotest_common.sh@10 -- # set +x 00:23:51.464 [2024-07-26 22:10:02.562321] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:51.464 [2024-07-26 22:10:02.562375] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.464 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.464 [2024-07-26 22:10:02.647260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.464 [2024-07-26 22:10:02.684877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:51.464 [2024-07-26 22:10:02.684995] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.464 [2024-07-26 22:10:02.685005] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.464 [2024-07-26 22:10:02.685015] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.464 [2024-07-26 22:10:02.685143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.464 [2024-07-26 22:10:02.685168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.464 [2024-07-26 22:10:02.685206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.464 [2024-07-26 22:10:02.685207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:52.397 22:10:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:52.397 22:10:03 -- common/autotest_common.sh@852 -- # return 0 00:23:52.397 22:10:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:52.397 22:10:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:52.397 22:10:03 -- common/autotest_common.sh@10 -- # set +x 00:23:52.397 22:10:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.397 22:10:03 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:52.397 22:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:52.397 22:10:03 -- common/autotest_common.sh@10 -- # set +x 00:23:52.397 [2024-07-26 22:10:03.436302] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdbb7a0/0xdbfc90) succeed. 00:23:52.397 [2024-07-26 22:10:03.446818] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdbcd90/0xe01320) succeed. 
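Reproduced by hand, the target bring-up traced above comes down to two steps; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, and the flags below are copied from the log (paths relative to an SPDK checkout):

# Start the NVMe-oF target: shm id 0, all tracepoint groups, core mask 0x1E (cores 1-4).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
# Once the RPC socket is up, create the RDMA transport with 1024 shared
# receive buffers and an 8192-byte I/O unit size, as seen in the trace.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192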
00:23:52.397 22:10:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:52.397 22:10:03 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:52.397 22:10:03 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:52.397 22:10:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:52.397 22:10:03 -- common/autotest_common.sh@10 -- # set +x 00:23:52.397 22:10:03 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.397 22:10:03 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:52.397 22:10:03 -- target/shutdown.sh@28 -- # cat 00:23:52.654 22:10:03 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:52.654 22:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:52.654 22:10:03 -- common/autotest_common.sh@10 -- # set +x 00:23:52.654 Malloc1 00:23:52.654 [2024-07-26 22:10:03.673551] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:52.654 Malloc2 00:23:52.654 Malloc3 00:23:52.654 Malloc4 00:23:52.654 Malloc5 00:23:52.654 Malloc6 00:23:52.911 Malloc7 00:23:52.911 Malloc8 00:23:52.911 Malloc9 00:23:52.911 Malloc10 00:23:52.911 22:10:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:52.911 22:10:04 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:52.911 22:10:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:52.911 22:10:04 -- common/autotest_common.sh@10 -- # set +x 00:23:52.911 22:10:04 -- target/shutdown.sh@78 -- # perfpid=2272148 00:23:52.911 22:10:04 -- target/shutdown.sh@79 -- # waitforlisten 2272148 /var/tmp/bdevperf.sock 00:23:52.911 22:10:04 -- common/autotest_common.sh@819 -- # '[' -z 2272148 ']' 00:23:52.911 22:10:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:52.911 22:10:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:52.911 22:10:04 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:52.911 22:10:04 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:52.911 22:10:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:52.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:52.911 22:10:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:52.911 22:10:04 -- nvmf/common.sh@520 -- # config=() 00:23:52.911 22:10:04 -- common/autotest_common.sh@10 -- # set +x 00:23:52.911 22:10:04 -- nvmf/common.sh@520 -- # local subsystem config 00:23:52.911 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.911 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.911 { 00:23:52.911 "params": { 00:23:52.911 "name": "Nvme$subsystem", 00:23:52.911 "trtype": "$TEST_TRANSPORT", 00:23:52.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.911 "adrfam": "ipv4", 00:23:52.911 "trsvcid": "$NVMF_PORT", 00:23:52.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.911 "hdgst": ${hdgst:-false}, 00:23:52.911 "ddgst": ${ddgst:-false} 00:23:52.911 }, 00:23:52.911 "method": "bdev_nvme_attach_controller" 00:23:52.911 } 00:23:52.911 EOF 00:23:52.911 )") 00:23:52.911 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:52.911 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.911 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.911 { 00:23:52.911 "params": { 00:23:52.911 "name": "Nvme$subsystem", 00:23:52.911 "trtype": "$TEST_TRANSPORT", 00:23:52.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.911 "adrfam": "ipv4", 00:23:52.911 "trsvcid": "$NVMF_PORT", 00:23:52.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.911 "hdgst": ${hdgst:-false}, 00:23:52.911 "ddgst": ${ddgst:-false} 00:23:52.911 }, 00:23:52.911 "method": "bdev_nvme_attach_controller" 00:23:52.911 } 00:23:52.911 EOF 00:23:52.911 )") 00:23:52.911 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:52.911 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:52.911 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:52.911 { 00:23:52.911 "params": { 00:23:52.911 "name": "Nvme$subsystem", 00:23:52.911 "trtype": "$TEST_TRANSPORT", 00:23:52.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:52.911 "adrfam": "ipv4", 00:23:52.911 "trsvcid": "$NVMF_PORT", 00:23:52.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:52.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:52.911 "hdgst": ${hdgst:-false}, 00:23:52.911 "ddgst": ${ddgst:-false} 00:23:52.911 }, 00:23:52.911 "method": "bdev_nvme_attach_controller" 00:23:52.911 } 00:23:52.911 EOF 00:23:52.911 )") 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:53.169 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.169 { 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme$subsystem", 00:23:53.169 "trtype": "$TEST_TRANSPORT", 00:23:53.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "$NVMF_PORT", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.169 "hdgst": ${hdgst:-false}, 00:23:53.169 "ddgst": ${ddgst:-false} 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 } 00:23:53.169 EOF 00:23:53.169 )") 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:53.169 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
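The gen_nvmf_target_json expansion being traced here (and continuing below) appends one bdev_nvme_attach_controller stanza per subsystem index. A cleaned-up sketch of that loop, omitting the outer JSON wrapper the real helper adds; the variables resolve to the values visible in the trace (rdma transport, 192.168.100.8, port 4420):

config=()
for subsystem in "$@"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done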
00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.169 { 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme$subsystem", 00:23:53.169 "trtype": "$TEST_TRANSPORT", 00:23:53.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "$NVMF_PORT", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.169 "hdgst": ${hdgst:-false}, 00:23:53.169 "ddgst": ${ddgst:-false} 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 } 00:23:53.169 EOF 00:23:53.169 )") 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:53.169 [2024-07-26 22:10:04.159587] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:53.169 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.169 [2024-07-26 22:10:04.159647] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.169 { 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme$subsystem", 00:23:53.169 "trtype": "$TEST_TRANSPORT", 00:23:53.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "$NVMF_PORT", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.169 "hdgst": ${hdgst:-false}, 00:23:53.169 "ddgst": ${ddgst:-false} 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 } 00:23:53.169 EOF 00:23:53.169 )") 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:53.169 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.169 { 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme$subsystem", 00:23:53.169 "trtype": "$TEST_TRANSPORT", 00:23:53.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "$NVMF_PORT", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.169 "hdgst": ${hdgst:-false}, 00:23:53.169 "ddgst": ${ddgst:-false} 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 } 00:23:53.169 EOF 00:23:53.169 )") 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:53.169 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.169 { 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme$subsystem", 00:23:53.169 "trtype": "$TEST_TRANSPORT", 00:23:53.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "$NVMF_PORT", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.169 "hdgst": ${hdgst:-false}, 00:23:53.169 "ddgst": ${ddgst:-false} 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 } 00:23:53.169 EOF 00:23:53.169 )") 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:53.169 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.169 { 00:23:53.169 "params": { 00:23:53.169 "name": 
"Nvme$subsystem", 00:23:53.169 "trtype": "$TEST_TRANSPORT", 00:23:53.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "$NVMF_PORT", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.169 "hdgst": ${hdgst:-false}, 00:23:53.169 "ddgst": ${ddgst:-false} 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 } 00:23:53.169 EOF 00:23:53.169 )") 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:53.169 22:10:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:53.169 { 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme$subsystem", 00:23:53.169 "trtype": "$TEST_TRANSPORT", 00:23:53.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "$NVMF_PORT", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.169 "hdgst": ${hdgst:-false}, 00:23:53.169 "ddgst": ${ddgst:-false} 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 } 00:23:53.169 EOF 00:23:53.169 )") 00:23:53.169 22:10:04 -- nvmf/common.sh@542 -- # cat 00:23:53.169 22:10:04 -- nvmf/common.sh@544 -- # jq . 00:23:53.169 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.169 22:10:04 -- nvmf/common.sh@545 -- # IFS=, 00:23:53.169 22:10:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme1", 00:23:53.169 "trtype": "rdma", 00:23:53.169 "traddr": "192.168.100.8", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "4420", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.169 "hdgst": false, 00:23:53.169 "ddgst": false 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 },{ 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme2", 00:23:53.169 "trtype": "rdma", 00:23:53.169 "traddr": "192.168.100.8", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "4420", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:53.169 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.169 "hdgst": false, 00:23:53.169 "ddgst": false 00:23:53.169 }, 00:23:53.169 "method": "bdev_nvme_attach_controller" 00:23:53.169 },{ 00:23:53.169 "params": { 00:23:53.169 "name": "Nvme3", 00:23:53.169 "trtype": "rdma", 00:23:53.169 "traddr": "192.168.100.8", 00:23:53.169 "adrfam": "ipv4", 00:23:53.169 "trsvcid": "4420", 00:23:53.169 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:53.170 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:53.170 "hdgst": false, 00:23:53.170 "ddgst": false 00:23:53.170 }, 00:23:53.170 "method": "bdev_nvme_attach_controller" 00:23:53.170 },{ 00:23:53.170 "params": { 00:23:53.170 "name": "Nvme4", 00:23:53.170 "trtype": "rdma", 00:23:53.170 "traddr": "192.168.100.8", 00:23:53.170 "adrfam": "ipv4", 00:23:53.170 "trsvcid": "4420", 00:23:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:53.170 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:53.170 "hdgst": false, 00:23:53.170 "ddgst": false 00:23:53.170 }, 00:23:53.170 "method": "bdev_nvme_attach_controller" 00:23:53.170 },{ 00:23:53.170 "params": { 00:23:53.170 "name": "Nvme5", 00:23:53.170 "trtype": "rdma", 00:23:53.170 "traddr": "192.168.100.8", 00:23:53.170 "adrfam": "ipv4", 00:23:53.170 "trsvcid": "4420", 00:23:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:53.170 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:53.170 "hdgst": false, 00:23:53.170 "ddgst": false 00:23:53.170 }, 00:23:53.170 "method": "bdev_nvme_attach_controller" 00:23:53.170 },{ 00:23:53.170 "params": { 00:23:53.170 "name": "Nvme6", 00:23:53.170 "trtype": "rdma", 00:23:53.170 "traddr": "192.168.100.8", 00:23:53.170 "adrfam": "ipv4", 00:23:53.170 "trsvcid": "4420", 00:23:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:53.170 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:53.170 "hdgst": false, 00:23:53.170 "ddgst": false 00:23:53.170 }, 00:23:53.170 "method": "bdev_nvme_attach_controller" 00:23:53.170 },{ 00:23:53.170 "params": { 00:23:53.170 "name": "Nvme7", 00:23:53.170 "trtype": "rdma", 00:23:53.170 "traddr": "192.168.100.8", 00:23:53.170 "adrfam": "ipv4", 00:23:53.170 "trsvcid": "4420", 00:23:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:53.170 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:53.170 "hdgst": false, 00:23:53.170 "ddgst": false 00:23:53.170 }, 00:23:53.170 "method": "bdev_nvme_attach_controller" 00:23:53.170 },{ 00:23:53.170 "params": { 00:23:53.170 "name": "Nvme8", 00:23:53.170 "trtype": "rdma", 00:23:53.170 "traddr": "192.168.100.8", 00:23:53.170 "adrfam": "ipv4", 00:23:53.170 "trsvcid": "4420", 00:23:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:53.170 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:53.170 "hdgst": false, 00:23:53.170 "ddgst": false 00:23:53.170 }, 00:23:53.170 "method": "bdev_nvme_attach_controller" 00:23:53.170 },{ 00:23:53.170 "params": { 00:23:53.170 "name": "Nvme9", 00:23:53.170 "trtype": "rdma", 00:23:53.170 "traddr": "192.168.100.8", 00:23:53.170 "adrfam": "ipv4", 00:23:53.170 "trsvcid": "4420", 00:23:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:53.170 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:53.170 "hdgst": false, 00:23:53.170 "ddgst": false 00:23:53.170 }, 00:23:53.170 "method": "bdev_nvme_attach_controller" 00:23:53.170 },{ 00:23:53.170 "params": { 00:23:53.170 "name": "Nvme10", 00:23:53.170 "trtype": "rdma", 00:23:53.170 "traddr": "192.168.100.8", 00:23:53.170 "adrfam": "ipv4", 00:23:53.170 "trsvcid": "4420", 00:23:53.170 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:53.170 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:53.170 "hdgst": false, 00:23:53.170 "ddgst": false 00:23:53.170 }, 00:23:53.170 "method": "bdev_nvme_attach_controller" 00:23:53.170 }' 00:23:53.170 [2024-07-26 22:10:04.246352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.170 [2024-07-26 22:10:04.282343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.542 22:10:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:54.542 22:10:05 -- common/autotest_common.sh@852 -- # return 0 00:23:54.542 22:10:05 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:54.542 22:10:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:54.542 22:10:05 -- common/autotest_common.sh@10 -- # set +x 00:23:54.542 22:10:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:54.542 22:10:05 -- target/shutdown.sh@83 -- # kill -9 2272148 00:23:54.542 22:10:05 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:54.542 22:10:05 -- target/shutdown.sh@87 -- # sleep 1 00:23:55.475 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2272148 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:55.475 22:10:06 -- target/shutdown.sh@88 -- # kill -0 
2271827 00:23:55.475 22:10:06 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:55.475 22:10:06 -- nvmf/common.sh@520 -- # config=() 00:23:55.475 22:10:06 -- nvmf/common.sh@520 -- # local subsystem config 00:23:55.475 22:10:06 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:55.475 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.475 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.475 { 00:23:55.475 "params": { 00:23:55.476 "name": "Nvme$subsystem", 00:23:55.476 "trtype": "$TEST_TRANSPORT", 00:23:55.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.476 "adrfam": "ipv4", 00:23:55.476 "trsvcid": "$NVMF_PORT", 00:23:55.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.476 "hdgst": ${hdgst:-false}, 00:23:55.476 "ddgst": ${ddgst:-false} 00:23:55.476 }, 00:23:55.476 "method": "bdev_nvme_attach_controller" 00:23:55.476 } 00:23:55.476 EOF 00:23:55.476 )") 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.476 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.476 { 00:23:55.476 "params": { 00:23:55.476 "name": "Nvme$subsystem", 00:23:55.476 "trtype": "$TEST_TRANSPORT", 00:23:55.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.476 "adrfam": "ipv4", 00:23:55.476 "trsvcid": "$NVMF_PORT", 00:23:55.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.476 "hdgst": ${hdgst:-false}, 00:23:55.476 "ddgst": ${ddgst:-false} 00:23:55.476 }, 00:23:55.476 "method": "bdev_nvme_attach_controller" 00:23:55.476 } 00:23:55.476 EOF 00:23:55.476 )") 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.476 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.476 { 00:23:55.476 "params": { 00:23:55.476 "name": "Nvme$subsystem", 00:23:55.476 "trtype": "$TEST_TRANSPORT", 00:23:55.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.476 "adrfam": "ipv4", 00:23:55.476 "trsvcid": "$NVMF_PORT", 00:23:55.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.476 "hdgst": ${hdgst:-false}, 00:23:55.476 "ddgst": ${ddgst:-false} 00:23:55.476 }, 00:23:55.476 "method": "bdev_nvme_attach_controller" 00:23:55.476 } 00:23:55.476 EOF 00:23:55.476 )") 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.476 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.476 { 00:23:55.476 "params": { 00:23:55.476 "name": "Nvme$subsystem", 00:23:55.476 "trtype": "$TEST_TRANSPORT", 00:23:55.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.476 "adrfam": "ipv4", 00:23:55.476 "trsvcid": "$NVMF_PORT", 00:23:55.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.476 "hdgst": ${hdgst:-false}, 00:23:55.476 "ddgst": ${ddgst:-false} 00:23:55.476 }, 00:23:55.476 "method": "bdev_nvme_attach_controller" 00:23:55.476 } 00:23:55.476 EOF 00:23:55.476 )") 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.476 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.476 
22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.476 { 00:23:55.476 "params": { 00:23:55.476 "name": "Nvme$subsystem", 00:23:55.476 "trtype": "$TEST_TRANSPORT", 00:23:55.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.476 "adrfam": "ipv4", 00:23:55.476 "trsvcid": "$NVMF_PORT", 00:23:55.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.476 "hdgst": ${hdgst:-false}, 00:23:55.476 "ddgst": ${ddgst:-false} 00:23:55.476 }, 00:23:55.476 "method": "bdev_nvme_attach_controller" 00:23:55.476 } 00:23:55.476 EOF 00:23:55.476 )") 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.476 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.476 { 00:23:55.476 "params": { 00:23:55.476 "name": "Nvme$subsystem", 00:23:55.476 "trtype": "$TEST_TRANSPORT", 00:23:55.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.476 "adrfam": "ipv4", 00:23:55.476 "trsvcid": "$NVMF_PORT", 00:23:55.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.476 "hdgst": ${hdgst:-false}, 00:23:55.476 "ddgst": ${ddgst:-false} 00:23:55.476 }, 00:23:55.476 "method": "bdev_nvme_attach_controller" 00:23:55.476 } 00:23:55.476 EOF 00:23:55.476 )") 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.476 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.476 { 00:23:55.476 "params": { 00:23:55.476 "name": "Nvme$subsystem", 00:23:55.476 "trtype": "$TEST_TRANSPORT", 00:23:55.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.476 "adrfam": "ipv4", 00:23:55.476 "trsvcid": "$NVMF_PORT", 00:23:55.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.476 "hdgst": ${hdgst:-false}, 00:23:55.476 "ddgst": ${ddgst:-false} 00:23:55.476 }, 00:23:55.476 "method": "bdev_nvme_attach_controller" 00:23:55.476 } 00:23:55.476 EOF 00:23:55.476 )") 00:23:55.476 [2024-07-26 22:10:06.691422] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:55.476 [2024-07-26 22:10:06.691481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2272576 ] 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.476 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.476 { 00:23:55.476 "params": { 00:23:55.476 "name": "Nvme$subsystem", 00:23:55.476 "trtype": "$TEST_TRANSPORT", 00:23:55.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.476 "adrfam": "ipv4", 00:23:55.476 "trsvcid": "$NVMF_PORT", 00:23:55.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.476 "hdgst": ${hdgst:-false}, 00:23:55.476 "ddgst": ${ddgst:-false} 00:23:55.476 }, 00:23:55.476 "method": "bdev_nvme_attach_controller" 00:23:55.476 } 00:23:55.476 EOF 00:23:55.476 )") 00:23:55.476 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.735 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.735 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.735 { 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme$subsystem", 00:23:55.735 "trtype": "$TEST_TRANSPORT", 00:23:55.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "$NVMF_PORT", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.735 "hdgst": ${hdgst:-false}, 00:23:55.735 "ddgst": ${ddgst:-false} 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 } 00:23:55.735 EOF 00:23:55.735 )") 00:23:55.735 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.735 22:10:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:55.735 22:10:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:55.735 { 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme$subsystem", 00:23:55.735 "trtype": "$TEST_TRANSPORT", 00:23:55.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "$NVMF_PORT", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:55.735 "hdgst": ${hdgst:-false}, 00:23:55.735 "ddgst": ${ddgst:-false} 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 } 00:23:55.735 EOF 00:23:55.735 )") 00:23:55.735 22:10:06 -- nvmf/common.sh@542 -- # cat 00:23:55.735 22:10:06 -- nvmf/common.sh@544 -- # jq . 
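The ten stanzas built above are joined and piped through jq, and the rendered config printed next in the log is what bdevperf reads from its --json file descriptor. Fed by hand, the run traced here corresponds roughly to the following, with the flags copied from the traced command line (-q is queue depth, -o the I/O size in bytes, -w the workload, -t the run time in seconds):

./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1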
00:23:55.735 22:10:06 -- nvmf/common.sh@545 -- # IFS=, 00:23:55.735 22:10:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme1", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme2", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme3", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme4", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme5", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme6", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme7", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme8", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 
00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme9", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 },{ 00:23:55.735 "params": { 00:23:55.735 "name": "Nvme10", 00:23:55.735 "trtype": "rdma", 00:23:55.735 "traddr": "192.168.100.8", 00:23:55.735 "adrfam": "ipv4", 00:23:55.735 "trsvcid": "4420", 00:23:55.735 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:55.735 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:55.735 "hdgst": false, 00:23:55.735 "ddgst": false 00:23:55.735 }, 00:23:55.735 "method": "bdev_nvme_attach_controller" 00:23:55.735 }' 00:23:55.735 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.735 [2024-07-26 22:10:06.780876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.735 [2024-07-26 22:10:06.818154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.669 Running I/O for 1 seconds... 00:23:57.604 00:23:57.604 Latency(us) 00:23:57.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.604 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.604 Verification LBA range: start 0x0 length 0x400 00:23:57.604 Nvme1n1 : 1.10 734.29 45.89 0.00 0.00 86190.29 7392.46 119957.09 00:23:57.604 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.604 Verification LBA range: start 0x0 length 0x400 00:23:57.604 Nvme2n1 : 1.10 747.21 46.70 0.00 0.00 84058.90 7654.60 75497.47 00:23:57.604 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.604 Verification LBA range: start 0x0 length 0x400 00:23:57.604 Nvme3n1 : 1.11 746.53 46.66 0.00 0.00 83643.73 7864.32 74239.18 00:23:57.605 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.605 Verification LBA range: start 0x0 length 0x400 00:23:57.605 Nvme4n1 : 1.11 749.47 46.84 0.00 0.00 82835.13 8074.04 72142.03 00:23:57.605 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.605 Verification LBA range: start 0x0 length 0x400 00:23:57.605 Nvme5n1 : 1.11 745.19 46.57 0.00 0.00 82842.05 8283.75 70464.31 00:23:57.605 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.605 Verification LBA range: start 0x0 length 0x400 00:23:57.605 Nvme6n1 : 1.11 744.52 46.53 0.00 0.00 82428.15 8441.04 69625.45 00:23:57.605 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.605 Verification LBA range: start 0x0 length 0x400 00:23:57.605 Nvme7n1 : 1.11 743.84 46.49 0.00 0.00 82013.23 8703.18 71303.17 00:23:57.605 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.605 Verification LBA range: start 0x0 length 0x400 00:23:57.605 Nvme8n1 : 1.11 743.18 46.45 0.00 0.00 81602.57 8860.47 72980.89 00:23:57.605 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.605 Verification LBA range: start 0x0 length 0x400 00:23:57.605 Nvme9n1 : 1.11 742.51 46.41 0.00 0.00 81179.20 9070.18 75497.47 00:23:57.605 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:57.605 Verification LBA range: start 0x0 length 0x400 
00:23:57.605 Nvme10n1 : 1.11 548.56 34.29 0.00 0.00 109060.84 7654.60 333866.60 00:23:57.605 =================================================================================================================== 00:23:57.605 Total : 7245.30 452.83 0.00 0.00 84953.89 7392.46 333866.60 00:23:57.863 22:10:09 -- target/shutdown.sh@93 -- # stoptarget 00:23:57.863 22:10:09 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:57.863 22:10:09 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:57.863 22:10:09 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:57.863 22:10:09 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:57.863 22:10:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:57.863 22:10:09 -- nvmf/common.sh@116 -- # sync 00:23:57.863 22:10:09 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:57.863 22:10:09 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:57.863 22:10:09 -- nvmf/common.sh@119 -- # set +e 00:23:57.863 22:10:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:57.863 22:10:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:57.863 rmmod nvme_rdma 00:23:58.121 rmmod nvme_fabrics 00:23:58.121 22:10:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:58.121 22:10:09 -- nvmf/common.sh@123 -- # set -e 00:23:58.121 22:10:09 -- nvmf/common.sh@124 -- # return 0 00:23:58.121 22:10:09 -- nvmf/common.sh@477 -- # '[' -n 2271827 ']' 00:23:58.121 22:10:09 -- nvmf/common.sh@478 -- # killprocess 2271827 00:23:58.121 22:10:09 -- common/autotest_common.sh@926 -- # '[' -z 2271827 ']' 00:23:58.121 22:10:09 -- common/autotest_common.sh@930 -- # kill -0 2271827 00:23:58.121 22:10:09 -- common/autotest_common.sh@931 -- # uname 00:23:58.121 22:10:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:58.121 22:10:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2271827 00:23:58.121 22:10:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:58.121 22:10:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:58.121 22:10:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2271827' 00:23:58.121 killing process with pid 2271827 00:23:58.121 22:10:09 -- common/autotest_common.sh@945 -- # kill 2271827 00:23:58.121 22:10:09 -- common/autotest_common.sh@950 -- # wait 2271827 00:23:58.688 22:10:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:58.688 22:10:09 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:58.688 00:23:58.688 real 0m15.528s 00:23:58.688 user 0m33.260s 00:23:58.688 sys 0m7.624s 00:23:58.688 22:10:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.688 22:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:58.688 ************************************ 00:23:58.688 END TEST nvmf_shutdown_tc1 00:23:58.688 ************************************ 00:23:58.688 22:10:09 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:58.688 22:10:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:58.688 22:10:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:58.688 22:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:58.689 ************************************ 00:23:58.689 START TEST nvmf_shutdown_tc2 00:23:58.689 ************************************ 00:23:58.689 22:10:09 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:23:58.689 22:10:09 -- 
target/shutdown.sh@98 -- # starttarget 00:23:58.689 22:10:09 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:58.689 22:10:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:58.689 22:10:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.689 22:10:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:58.689 22:10:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:58.689 22:10:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:58.689 22:10:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.689 22:10:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.689 22:10:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.689 22:10:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:58.689 22:10:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:58.689 22:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:58.689 22:10:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:58.689 22:10:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:58.689 22:10:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:58.689 22:10:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:58.689 22:10:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:58.689 22:10:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:58.689 22:10:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:58.689 22:10:09 -- nvmf/common.sh@294 -- # net_devs=() 00:23:58.689 22:10:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:58.689 22:10:09 -- nvmf/common.sh@295 -- # e810=() 00:23:58.689 22:10:09 -- nvmf/common.sh@295 -- # local -ga e810 00:23:58.689 22:10:09 -- nvmf/common.sh@296 -- # x722=() 00:23:58.689 22:10:09 -- nvmf/common.sh@296 -- # local -ga x722 00:23:58.689 22:10:09 -- nvmf/common.sh@297 -- # mlx=() 00:23:58.689 22:10:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:58.689 22:10:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.689 22:10:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:58.689 22:10:09 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:58.689 22:10:09 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:58.689 22:10:09 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:58.689 22:10:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:58.689 22:10:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:23:58.689 22:10:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:58.689 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:58.689 22:10:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:58.689 22:10:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:58.689 22:10:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:58.689 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:58.689 22:10:09 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:58.689 22:10:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:58.689 22:10:09 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:58.689 22:10:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.689 22:10:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:58.689 22:10:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.689 22:10:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:58.689 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:58.689 22:10:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.689 22:10:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:58.689 22:10:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.689 22:10:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:58.689 22:10:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.689 22:10:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:58.689 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:58.689 22:10:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.689 22:10:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:58.689 22:10:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:58.689 22:10:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:58.689 22:10:09 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:58.689 22:10:09 -- nvmf/common.sh@57 -- # uname 00:23:58.689 22:10:09 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:58.689 22:10:09 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:58.689 22:10:09 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:58.689 22:10:09 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:58.689 22:10:09 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:58.689 22:10:09 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:58.689 22:10:09 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:58.689 22:10:09 -- 
nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:58.689 22:10:09 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:58.689 22:10:09 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:58.689 22:10:09 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:58.689 22:10:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:58.689 22:10:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:58.689 22:10:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:58.689 22:10:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:58.689 22:10:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:58.689 22:10:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:58.689 22:10:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.689 22:10:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:58.689 22:10:09 -- nvmf/common.sh@104 -- # continue 2 00:23:58.689 22:10:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:58.689 22:10:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.689 22:10:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.689 22:10:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:58.689 22:10:09 -- nvmf/common.sh@104 -- # continue 2 00:23:58.689 22:10:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:58.689 22:10:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:58.689 22:10:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:58.689 22:10:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:58.689 22:10:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:58.689 22:10:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:58.689 22:10:09 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:58.689 22:10:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:58.689 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:58.689 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:58.689 altname enp217s0f0np0 00:23:58.689 altname ens818f0np0 00:23:58.689 inet 192.168.100.8/24 scope global mlx_0_0 00:23:58.689 valid_lft forever preferred_lft forever 00:23:58.689 22:10:09 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:58.689 22:10:09 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:58.689 22:10:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:58.689 22:10:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:58.689 22:10:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:58.689 22:10:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:58.689 22:10:09 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:58.689 22:10:09 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:58.689 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:58.689 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:58.689 altname enp217s0f1np1 00:23:58.689 altname ens818f1np1 00:23:58.689 inet 192.168.100.9/24 scope global mlx_0_1 00:23:58.689 valid_lft forever preferred_lft forever 00:23:58.689 22:10:09 -- nvmf/common.sh@410 -- # return 0 00:23:58.689 
22:10:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:58.689 22:10:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:58.689 22:10:09 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:58.689 22:10:09 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:58.689 22:10:09 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:58.690 22:10:09 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:58.690 22:10:09 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:58.690 22:10:09 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:58.690 22:10:09 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:58.690 22:10:09 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:58.690 22:10:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:58.690 22:10:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.690 22:10:09 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:58.690 22:10:09 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:58.690 22:10:09 -- nvmf/common.sh@104 -- # continue 2 00:23:58.690 22:10:09 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:58.690 22:10:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.690 22:10:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:58.690 22:10:09 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.690 22:10:09 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:58.690 22:10:09 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:58.690 22:10:09 -- nvmf/common.sh@104 -- # continue 2 00:23:58.690 22:10:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:58.690 22:10:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:58.690 22:10:09 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:58.690 22:10:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:58.690 22:10:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:58.690 22:10:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:58.690 22:10:09 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:58.690 22:10:09 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:58.690 22:10:09 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:58.690 22:10:09 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:58.690 22:10:09 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:58.690 22:10:09 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:58.690 22:10:09 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:58.690 192.168.100.9' 00:23:58.690 22:10:09 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:58.690 192.168.100.9' 00:23:58.690 22:10:09 -- nvmf/common.sh@445 -- # head -n 1 00:23:58.690 22:10:09 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:58.690 22:10:09 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:58.690 192.168.100.9' 00:23:58.690 22:10:09 -- nvmf/common.sh@446 -- # tail -n +2 00:23:58.690 22:10:09 -- nvmf/common.sh@446 -- # head -n 1 00:23:58.690 22:10:09 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:58.690 22:10:09 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:58.690 22:10:09 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:58.690 22:10:09 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:58.690 22:10:09 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:58.690 22:10:09 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 
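Note: the address discovery traced above reduces to one pipeline over "ip -o -4 addr show". A minimal bash sketch, reusing the helper name from the traced script (get_ip_address); the interface names and addresses are the ones this testbed reports:

    # Print the first IPv4 address of an interface, e.g. "192.168.100.8/24" -> "192.168.100.8"
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # 192.168.100.9 on this rig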
00:23:58.948 22:10:09 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:58.948 22:10:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:58.948 22:10:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:58.948 22:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:58.948 22:10:09 -- nvmf/common.sh@469 -- # nvmfpid=2273262 00:23:58.948 22:10:09 -- nvmf/common.sh@470 -- # waitforlisten 2273262 00:23:58.948 22:10:09 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:58.948 22:10:09 -- common/autotest_common.sh@819 -- # '[' -z 2273262 ']' 00:23:58.948 22:10:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.949 22:10:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:58.949 22:10:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.949 22:10:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:58.949 22:10:09 -- common/autotest_common.sh@10 -- # set +x 00:23:58.949 [2024-07-26 22:10:09.987039] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:58.949 [2024-07-26 22:10:09.987090] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.949 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.949 [2024-07-26 22:10:10.076807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.949 [2024-07-26 22:10:10.114080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:58.949 [2024-07-26 22:10:10.114213] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.949 [2024-07-26 22:10:10.114222] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.949 [2024-07-26 22:10:10.114231] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
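For reference, the target launch traced above with what each flag selects (standard SPDK application options; the 0x1E core mask accounts for the four reactors reported next):

    # nvmfappstart -m 0x1E runs the target roughly like this (path as used by this job):
    #   -i 0       shared-memory instance id
    #   -e 0xFFFF  tracepoint group mask (hence the app_setup_trace notices above)
    #   -m 0x1E    core mask 0b11110, i.e. cores 1-4
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E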
00:23:58.949 [2024-07-26 22:10:10.114351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.949 [2024-07-26 22:10:10.114440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.949 [2024-07-26 22:10:10.114550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.949 [2024-07-26 22:10:10.114552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:59.915 22:10:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:59.915 22:10:10 -- common/autotest_common.sh@852 -- # return 0 00:23:59.915 22:10:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:59.915 22:10:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:59.915 22:10:10 -- common/autotest_common.sh@10 -- # set +x 00:23:59.915 22:10:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.915 22:10:10 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:59.915 22:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.915 22:10:10 -- common/autotest_common.sh@10 -- # set +x 00:23:59.915 [2024-07-26 22:10:10.846111] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22437a0/0x2247c90) succeed. 00:23:59.915 [2024-07-26 22:10:10.856561] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2244d90/0x2289320) succeed. 00:23:59.915 22:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:59.915 22:10:10 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:59.915 22:10:10 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:59.915 22:10:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:59.915 22:10:10 -- common/autotest_common.sh@10 -- # set +x 00:23:59.915 22:10:10 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:59.915 22:10:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:10 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:10 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:10 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:10 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:11 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:11 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:11 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:11 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:11 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:11 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:11 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.915 22:10:11 -- target/shutdown.sh@28 -- # cat 00:23:59.915 22:10:11 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:59.915 22:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.915 22:10:11 -- common/autotest_common.sh@10 -- # set +x 
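The rpcs.txt batch assembled by the cat loop above is not echoed into the log; the sketch below pairs the transport command that is traced verbatim with an assumed example of the per-subsystem RPCs such a batch typically contains (bdev name, size and serial are illustrative; cnode2..cnode10 follow the same pattern):

    # Traced verbatim above:
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # Assumed per-subsystem block (repeated for each of the ten subsystems):
    rpc.py bdev_malloc_create -b Malloc1 128 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420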
00:23:59.915 Malloc1 00:23:59.915 [2024-07-26 22:10:11.080724] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:59.915 Malloc2 00:24:00.173 Malloc3 00:24:00.173 Malloc4 00:24:00.173 Malloc5 00:24:00.173 Malloc6 00:24:00.173 Malloc7 00:24:00.173 Malloc8 00:24:00.431 Malloc9 00:24:00.431 Malloc10 00:24:00.431 22:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:00.431 22:10:11 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:00.431 22:10:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:00.431 22:10:11 -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 22:10:11 -- target/shutdown.sh@102 -- # perfpid=2273584 00:24:00.431 22:10:11 -- target/shutdown.sh@103 -- # waitforlisten 2273584 /var/tmp/bdevperf.sock 00:24:00.431 22:10:11 -- common/autotest_common.sh@819 -- # '[' -z 2273584 ']' 00:24:00.431 22:10:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.431 22:10:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:00.431 22:10:11 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:00.431 22:10:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.431 22:10:11 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:00.431 22:10:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:00.431 22:10:11 -- common/autotest_common.sh@10 -- # set +x 00:24:00.431 22:10:11 -- nvmf/common.sh@520 -- # config=() 00:24:00.431 22:10:11 -- nvmf/common.sh@520 -- # local subsystem config 00:24:00.431 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.431 { 00:24:00.431 "params": { 00:24:00.431 "name": "Nvme$subsystem", 00:24:00.431 "trtype": "$TEST_TRANSPORT", 00:24:00.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "$NVMF_PORT", 00:24:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.431 "hdgst": ${hdgst:-false}, 00:24:00.431 "ddgst": ${ddgst:-false} 00:24:00.431 }, 00:24:00.431 "method": "bdev_nvme_attach_controller" 00:24:00.431 } 00:24:00.431 EOF 00:24:00.431 )") 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.431 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.431 { 00:24:00.431 "params": { 00:24:00.431 "name": "Nvme$subsystem", 00:24:00.431 "trtype": "$TEST_TRANSPORT", 00:24:00.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "$NVMF_PORT", 00:24:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.431 "hdgst": ${hdgst:-false}, 00:24:00.431 "ddgst": ${ddgst:-false} 00:24:00.431 }, 00:24:00.431 "method": "bdev_nvme_attach_controller" 00:24:00.431 } 00:24:00.431 EOF 00:24:00.431 )") 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.431 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:24:00.431 { 00:24:00.431 "params": { 00:24:00.431 "name": "Nvme$subsystem", 00:24:00.431 "trtype": "$TEST_TRANSPORT", 00:24:00.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "$NVMF_PORT", 00:24:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.431 "hdgst": ${hdgst:-false}, 00:24:00.431 "ddgst": ${ddgst:-false} 00:24:00.431 }, 00:24:00.431 "method": "bdev_nvme_attach_controller" 00:24:00.431 } 00:24:00.431 EOF 00:24:00.431 )") 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.431 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.431 { 00:24:00.431 "params": { 00:24:00.431 "name": "Nvme$subsystem", 00:24:00.431 "trtype": "$TEST_TRANSPORT", 00:24:00.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "$NVMF_PORT", 00:24:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.431 "hdgst": ${hdgst:-false}, 00:24:00.431 "ddgst": ${ddgst:-false} 00:24:00.431 }, 00:24:00.431 "method": "bdev_nvme_attach_controller" 00:24:00.431 } 00:24:00.431 EOF 00:24:00.431 )") 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.431 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.431 { 00:24:00.431 "params": { 00:24:00.431 "name": "Nvme$subsystem", 00:24:00.431 "trtype": "$TEST_TRANSPORT", 00:24:00.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "$NVMF_PORT", 00:24:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.431 "hdgst": ${hdgst:-false}, 00:24:00.431 "ddgst": ${ddgst:-false} 00:24:00.431 }, 00:24:00.431 "method": "bdev_nvme_attach_controller" 00:24:00.431 } 00:24:00.431 EOF 00:24:00.431 )") 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.431 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.431 [2024-07-26 22:10:11.568343] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:00.431 [2024-07-26 22:10:11.568400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2273584 ] 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.431 { 00:24:00.431 "params": { 00:24:00.431 "name": "Nvme$subsystem", 00:24:00.431 "trtype": "$TEST_TRANSPORT", 00:24:00.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "$NVMF_PORT", 00:24:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.431 "hdgst": ${hdgst:-false}, 00:24:00.431 "ddgst": ${ddgst:-false} 00:24:00.431 }, 00:24:00.431 "method": "bdev_nvme_attach_controller" 00:24:00.431 } 00:24:00.431 EOF 00:24:00.431 )") 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.431 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.431 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.431 { 00:24:00.431 "params": { 00:24:00.431 "name": "Nvme$subsystem", 00:24:00.431 "trtype": "$TEST_TRANSPORT", 00:24:00.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.431 "adrfam": "ipv4", 00:24:00.431 "trsvcid": "$NVMF_PORT", 00:24:00.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.431 "hdgst": ${hdgst:-false}, 00:24:00.431 "ddgst": ${ddgst:-false} 00:24:00.431 }, 00:24:00.431 "method": "bdev_nvme_attach_controller" 00:24:00.432 } 00:24:00.432 EOF 00:24:00.432 )") 00:24:00.432 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.432 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.432 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.432 { 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme$subsystem", 00:24:00.432 "trtype": "$TEST_TRANSPORT", 00:24:00.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "$NVMF_PORT", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.432 "hdgst": ${hdgst:-false}, 00:24:00.432 "ddgst": ${ddgst:-false} 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 } 00:24:00.432 EOF 00:24:00.432 )") 00:24:00.432 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.432 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.432 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.432 { 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme$subsystem", 00:24:00.432 "trtype": "$TEST_TRANSPORT", 00:24:00.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "$NVMF_PORT", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.432 "hdgst": ${hdgst:-false}, 00:24:00.432 "ddgst": ${ddgst:-false} 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 } 00:24:00.432 EOF 00:24:00.432 )") 00:24:00.432 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.432 22:10:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:00.432 22:10:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:00.432 { 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme$subsystem", 00:24:00.432 "trtype": "$TEST_TRANSPORT", 00:24:00.432 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "$NVMF_PORT", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.432 "hdgst": ${hdgst:-false}, 00:24:00.432 "ddgst": ${ddgst:-false} 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 } 00:24:00.432 EOF 00:24:00.432 )") 00:24:00.432 22:10:11 -- nvmf/common.sh@542 -- # cat 00:24:00.432 22:10:11 -- nvmf/common.sh@544 -- # jq . 00:24:00.432 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.432 22:10:11 -- nvmf/common.sh@545 -- # IFS=, 00:24:00.432 22:10:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme1", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme2", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme3", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme4", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme5", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme6", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme7", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode7", 
00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme8", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme9", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 },{ 00:24:00.432 "params": { 00:24:00.432 "name": "Nvme10", 00:24:00.432 "trtype": "rdma", 00:24:00.432 "traddr": "192.168.100.8", 00:24:00.432 "adrfam": "ipv4", 00:24:00.432 "trsvcid": "4420", 00:24:00.432 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:00.432 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:00.432 "hdgst": false, 00:24:00.432 "ddgst": false 00:24:00.432 }, 00:24:00.432 "method": "bdev_nvme_attach_controller" 00:24:00.432 }' 00:24:00.432 [2024-07-26 22:10:11.655665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.690 [2024-07-26 22:10:11.692183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.622 Running I/O for 10 seconds... 00:24:02.187 22:10:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:02.187 22:10:13 -- common/autotest_common.sh@852 -- # return 0 00:24:02.187 22:10:13 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:02.187 22:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.187 22:10:13 -- common/autotest_common.sh@10 -- # set +x 00:24:02.187 22:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.187 22:10:13 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:02.187 22:10:13 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:02.187 22:10:13 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:02.187 22:10:13 -- target/shutdown.sh@57 -- # local ret=1 00:24:02.187 22:10:13 -- target/shutdown.sh@58 -- # local i 00:24:02.187 22:10:13 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:02.187 22:10:13 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:02.187 22:10:13 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:02.187 22:10:13 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:02.187 22:10:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:02.187 22:10:13 -- common/autotest_common.sh@10 -- # set +x 00:24:02.187 22:10:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:02.187 22:10:13 -- target/shutdown.sh@60 -- # read_io_count=461 00:24:02.187 22:10:13 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:24:02.187 22:10:13 -- target/shutdown.sh@64 -- # ret=0 00:24:02.187 22:10:13 -- target/shutdown.sh@65 -- # break 00:24:02.187 22:10:13 -- target/shutdown.sh@69 -- # return 0 00:24:02.187 22:10:13 -- target/shutdown.sh@109 -- # killprocess 2273584 00:24:02.187 
22:10:13 -- common/autotest_common.sh@926 -- # '[' -z 2273584 ']' 00:24:02.187 22:10:13 -- common/autotest_common.sh@930 -- # kill -0 2273584 00:24:02.187 22:10:13 -- common/autotest_common.sh@931 -- # uname 00:24:02.187 22:10:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:02.187 22:10:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2273584 00:24:02.187 22:10:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:02.187 22:10:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:02.187 22:10:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2273584' 00:24:02.187 killing process with pid 2273584 00:24:02.187 22:10:13 -- common/autotest_common.sh@945 -- # kill 2273584 00:24:02.187 22:10:13 -- common/autotest_common.sh@950 -- # wait 2273584 00:24:02.444 Received shutdown signal, test time was about 0.929761 seconds 00:24:02.444 00:24:02.444 Latency(us) 00:24:02.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.444 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme1n1 : 0.92 713.00 44.56 0.00 0.00 88632.48 7497.32 107374.18 00:24:02.444 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme2n1 : 0.92 712.18 44.51 0.00 0.00 87935.89 7759.46 105277.03 00:24:02.444 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme3n1 : 0.92 717.87 44.87 0.00 0.00 86607.39 8126.46 101921.59 00:24:02.444 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme4n1 : 0.92 737.63 46.10 0.00 0.00 83701.12 8388.61 96468.99 00:24:02.444 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme5n1 : 0.92 742.27 46.39 0.00 0.00 82538.18 8545.89 93113.55 00:24:02.444 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme6n1 : 0.93 748.00 46.75 0.00 0.00 81333.28 8650.75 75078.04 00:24:02.444 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme7n1 : 0.93 747.23 46.70 0.00 0.00 80845.59 8755.61 73400.32 00:24:02.444 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme8n1 : 0.93 746.46 46.65 0.00 0.00 80337.14 8912.90 72142.03 00:24:02.444 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme9n1 : 0.93 745.69 46.61 0.00 0.00 79842.85 9017.75 71303.17 00:24:02.444 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.444 Verification LBA range: start 0x0 length 0x400 00:24:02.444 Nvme10n1 : 0.93 519.93 32.50 0.00 0.00 113622.92 7707.03 315411.66 00:24:02.444 =================================================================================================================== 00:24:02.444 Total : 7130.27 445.64 0.00 0.00 85670.02 7497.32 315411.66 00:24:02.702 22:10:13 -- target/shutdown.sh@112 -- # sleep 1 
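The shutdown sequence that follows (kill -0 on the target pid, then killprocess) reduces to the pattern below; the pid is the one from this run:

    # Sketch of the check traced below: probe the target, then stop it and reap it.
    nvmfpid=2273262
    if kill -0 "$nvmfpid" 2>/dev/null; then    # target still alive after bdevperf was killed?
        kill "$nvmfpid"
        wait "$nvmfpid"
    fi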
00:24:03.634 22:10:14 -- target/shutdown.sh@113 -- # kill -0 2273262 00:24:03.634 22:10:14 -- target/shutdown.sh@115 -- # stoptarget 00:24:03.634 22:10:14 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:03.634 22:10:14 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:03.634 22:10:14 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:03.634 22:10:14 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:03.634 22:10:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:03.634 22:10:14 -- nvmf/common.sh@116 -- # sync 00:24:03.634 22:10:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:03.634 22:10:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:03.634 22:10:14 -- nvmf/common.sh@119 -- # set +e 00:24:03.634 22:10:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:03.634 22:10:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:03.634 rmmod nvme_rdma 00:24:03.634 rmmod nvme_fabrics 00:24:03.634 22:10:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:03.634 22:10:14 -- nvmf/common.sh@123 -- # set -e 00:24:03.634 22:10:14 -- nvmf/common.sh@124 -- # return 0 00:24:03.634 22:10:14 -- nvmf/common.sh@477 -- # '[' -n 2273262 ']' 00:24:03.634 22:10:14 -- nvmf/common.sh@478 -- # killprocess 2273262 00:24:03.634 22:10:14 -- common/autotest_common.sh@926 -- # '[' -z 2273262 ']' 00:24:03.634 22:10:14 -- common/autotest_common.sh@930 -- # kill -0 2273262 00:24:03.634 22:10:14 -- common/autotest_common.sh@931 -- # uname 00:24:03.634 22:10:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:03.634 22:10:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2273262 00:24:03.634 22:10:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:03.892 22:10:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:03.892 22:10:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2273262' 00:24:03.892 killing process with pid 2273262 00:24:03.892 22:10:14 -- common/autotest_common.sh@945 -- # kill 2273262 00:24:03.892 22:10:14 -- common/autotest_common.sh@950 -- # wait 2273262 00:24:04.150 22:10:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:04.150 22:10:15 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:04.150 00:24:04.150 real 0m5.638s 00:24:04.150 user 0m22.775s 00:24:04.150 sys 0m1.243s 00:24:04.150 22:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.150 22:10:15 -- common/autotest_common.sh@10 -- # set +x 00:24:04.150 ************************************ 00:24:04.150 END TEST nvmf_shutdown_tc2 00:24:04.150 ************************************ 00:24:04.150 22:10:15 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:04.150 22:10:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:04.150 22:10:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:04.150 22:10:15 -- common/autotest_common.sh@10 -- # set +x 00:24:04.150 ************************************ 00:24:04.150 START TEST nvmf_shutdown_tc3 00:24:04.150 ************************************ 00:24:04.150 22:10:15 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:24:04.150 22:10:15 -- target/shutdown.sh@120 -- # starttarget 00:24:04.150 22:10:15 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:04.150 22:10:15 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:04.150 22:10:15 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:04.150 22:10:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:04.150 22:10:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:04.150 22:10:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:04.150 22:10:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.150 22:10:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.151 22:10:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.151 22:10:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:04.151 22:10:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:04.151 22:10:15 -- common/autotest_common.sh@10 -- # set +x 00:24:04.151 22:10:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:04.151 22:10:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:04.151 22:10:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:04.151 22:10:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:04.151 22:10:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:04.151 22:10:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:04.151 22:10:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:04.151 22:10:15 -- nvmf/common.sh@294 -- # net_devs=() 00:24:04.151 22:10:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:04.151 22:10:15 -- nvmf/common.sh@295 -- # e810=() 00:24:04.151 22:10:15 -- nvmf/common.sh@295 -- # local -ga e810 00:24:04.151 22:10:15 -- nvmf/common.sh@296 -- # x722=() 00:24:04.151 22:10:15 -- nvmf/common.sh@296 -- # local -ga x722 00:24:04.151 22:10:15 -- nvmf/common.sh@297 -- # mlx=() 00:24:04.151 22:10:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:04.151 22:10:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.151 22:10:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:04.151 22:10:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:04.151 22:10:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:04.151 22:10:15 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:04.151 22:10:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:04.151 22:10:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:04.151 22:10:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:04.151 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:04.151 22:10:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 
00:24:04.151 22:10:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:04.151 22:10:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:04.151 22:10:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:04.151 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:04.151 22:10:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:04.151 22:10:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:04.151 22:10:15 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:04.151 22:10:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.151 22:10:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:04.151 22:10:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.151 22:10:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:04.151 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:04.151 22:10:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.151 22:10:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:04.151 22:10:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.151 22:10:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:04.151 22:10:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.151 22:10:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:04.151 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:04.151 22:10:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.151 22:10:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:04.151 22:10:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:04.151 22:10:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:04.151 22:10:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:04.151 22:10:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:04.151 22:10:15 -- nvmf/common.sh@57 -- # uname 00:24:04.151 22:10:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:04.151 22:10:15 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:04.410 22:10:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:04.410 22:10:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:04.410 22:10:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:04.410 22:10:15 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:04.410 22:10:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:04.410 22:10:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:04.410 22:10:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:04.410 22:10:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:04.410 22:10:15 -- 
nvmf/common.sh@72 -- # get_rdma_if_list 00:24:04.410 22:10:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:04.410 22:10:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:04.410 22:10:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:04.410 22:10:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:04.410 22:10:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:04.410 22:10:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:04.410 22:10:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.410 22:10:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:04.410 22:10:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:04.410 22:10:15 -- nvmf/common.sh@104 -- # continue 2 00:24:04.410 22:10:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:04.410 22:10:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.410 22:10:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:04.410 22:10:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.410 22:10:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:04.410 22:10:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:04.410 22:10:15 -- nvmf/common.sh@104 -- # continue 2 00:24:04.410 22:10:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:04.410 22:10:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:04.410 22:10:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:04.410 22:10:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:04.410 22:10:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:04.410 22:10:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:04.410 22:10:15 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:04.410 22:10:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:04.410 22:10:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:04.410 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:04.410 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:04.410 altname enp217s0f0np0 00:24:04.410 altname ens818f0np0 00:24:04.410 inet 192.168.100.8/24 scope global mlx_0_0 00:24:04.410 valid_lft forever preferred_lft forever 00:24:04.410 22:10:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:04.410 22:10:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:04.410 22:10:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:04.410 22:10:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:04.410 22:10:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:04.410 22:10:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:04.410 22:10:15 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:04.410 22:10:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:04.410 22:10:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:04.410 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:04.410 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:04.410 altname enp217s0f1np1 00:24:04.410 altname ens818f1np1 00:24:04.410 inet 192.168.100.9/24 scope global mlx_0_1 00:24:04.410 valid_lft forever preferred_lft forever 00:24:04.410 22:10:15 -- nvmf/common.sh@410 -- # return 0 00:24:04.410 22:10:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:04.410 22:10:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:04.410 22:10:15 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:04.410 
22:10:15 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:04.410 22:10:15 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:04.410 22:10:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:04.410 22:10:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:04.410 22:10:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:04.410 22:10:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:04.410 22:10:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:04.410 22:10:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:04.411 22:10:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.411 22:10:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:04.411 22:10:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:04.411 22:10:15 -- nvmf/common.sh@104 -- # continue 2 00:24:04.411 22:10:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:04.411 22:10:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.411 22:10:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:04.411 22:10:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:04.411 22:10:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:04.411 22:10:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:04.411 22:10:15 -- nvmf/common.sh@104 -- # continue 2 00:24:04.411 22:10:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:04.411 22:10:15 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:04.411 22:10:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:04.411 22:10:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:04.411 22:10:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:04.411 22:10:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:04.411 22:10:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:04.411 22:10:15 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:04.411 22:10:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:04.411 22:10:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:04.411 22:10:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:04.411 22:10:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:04.411 22:10:15 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:04.411 192.168.100.9' 00:24:04.411 22:10:15 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:04.411 192.168.100.9' 00:24:04.411 22:10:15 -- nvmf/common.sh@445 -- # head -n 1 00:24:04.411 22:10:15 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:04.411 22:10:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:04.411 192.168.100.9' 00:24:04.411 22:10:15 -- nvmf/common.sh@446 -- # tail -n +2 00:24:04.411 22:10:15 -- nvmf/common.sh@446 -- # head -n 1 00:24:04.411 22:10:15 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:04.411 22:10:15 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:04.411 22:10:15 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:04.411 22:10:15 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:04.411 22:10:15 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:04.411 22:10:15 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:04.411 22:10:15 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:04.411 22:10:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:04.411 22:10:15 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:24:04.411 22:10:15 -- common/autotest_common.sh@10 -- # set +x 00:24:04.411 22:10:15 -- nvmf/common.sh@469 -- # nvmfpid=2274346 00:24:04.411 22:10:15 -- nvmf/common.sh@470 -- # waitforlisten 2274346 00:24:04.411 22:10:15 -- common/autotest_common.sh@819 -- # '[' -z 2274346 ']' 00:24:04.411 22:10:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.411 22:10:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:04.411 22:10:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.411 22:10:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:04.411 22:10:15 -- common/autotest_common.sh@10 -- # set +x 00:24:04.411 22:10:15 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:04.669 [2024-07-26 22:10:15.638688] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:04.669 [2024-07-26 22:10:15.638738] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.669 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.669 [2024-07-26 22:10:15.724856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.669 [2024-07-26 22:10:15.762961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:04.669 [2024-07-26 22:10:15.763083] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.669 [2024-07-26 22:10:15.763092] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.669 [2024-07-26 22:10:15.763102] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.669 [2024-07-26 22:10:15.763140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.669 [2024-07-26 22:10:15.763231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.669 [2024-07-26 22:10:15.763339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.669 [2024-07-26 22:10:15.763340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:05.235 22:10:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:05.235 22:10:16 -- common/autotest_common.sh@852 -- # return 0 00:24:05.235 22:10:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:05.235 22:10:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:05.235 22:10:16 -- common/autotest_common.sh@10 -- # set +x 00:24:05.493 22:10:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.493 22:10:16 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:05.493 22:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:05.493 22:10:16 -- common/autotest_common.sh@10 -- # set +x 00:24:05.493 [2024-07-26 22:10:16.520833] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x68a7a0/0x68ec90) succeed. 00:24:05.493 [2024-07-26 22:10:16.531046] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x68bd90/0x6d0320) succeed. 
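Note on the interface-to-IP resolution traced above: get_ip_address boils down to an ip/awk/cut pipeline over "ip -o -4 addr show". The following is a minimal sketch reconstructed from that trace, not copied from nvmf/common.sh; the helper name get_iface_ip and the loop driver are illustrative.

# Sketch: resolve the first IPv4 address on a given RDMA interface,
# e.g. mlx_0_0 -> 192.168.100.8 and mlx_0_1 -> 192.168.100.9 as seen above.
get_iface_ip() {
    local interface=$1
    # "ip -o -4 addr show" prints one record per line; field 4 is "ADDR/PREFIX".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1 | head -n 1
}

# Collect one IP per RDMA-capable port before the test exports the target addresses.
for nic in mlx_0_0 mlx_0_1; do
    ip_addr=$(get_iface_ip "$nic")
    [[ -z $ip_addr ]] && continue
    echo "$nic -> $ip_addr"
done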
00:24:05.493 22:10:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:05.493 22:10:16 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:05.493 22:10:16 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:05.493 22:10:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:05.493 22:10:16 -- common/autotest_common.sh@10 -- # set +x 00:24:05.493 22:10:16 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:05.493 22:10:16 -- target/shutdown.sh@28 -- # cat 00:24:05.493 22:10:16 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:05.493 22:10:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:05.493 22:10:16 -- common/autotest_common.sh@10 -- # set +x 00:24:05.751 Malloc1 00:24:05.751 [2024-07-26 22:10:16.753511] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:05.751 Malloc2 00:24:05.751 Malloc3 00:24:05.751 Malloc4 00:24:05.751 Malloc5 00:24:05.751 Malloc6 00:24:06.009 Malloc7 00:24:06.009 Malloc8 00:24:06.009 Malloc9 00:24:06.009 Malloc10 00:24:06.009 22:10:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.009 22:10:17 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:06.009 22:10:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:06.009 22:10:17 -- common/autotest_common.sh@10 -- # set +x 00:24:06.009 22:10:17 -- target/shutdown.sh@124 -- # perfpid=2274674 00:24:06.009 22:10:17 -- target/shutdown.sh@125 -- # waitforlisten 2274674 /var/tmp/bdevperf.sock 00:24:06.009 22:10:17 -- common/autotest_common.sh@819 -- # '[' -z 2274674 ']' 00:24:06.009 22:10:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.009 22:10:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:06.009 22:10:17 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:06.009 22:10:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:06.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.009 22:10:17 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:06.009 22:10:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:06.009 22:10:17 -- nvmf/common.sh@520 -- # config=() 00:24:06.010 22:10:17 -- common/autotest_common.sh@10 -- # set +x 00:24:06.010 22:10:17 -- nvmf/common.sh@520 -- # local subsystem config 00:24:06.010 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.010 { 00:24:06.010 "params": { 00:24:06.010 "name": "Nvme$subsystem", 00:24:06.010 "trtype": "$TEST_TRANSPORT", 00:24:06.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.010 "adrfam": "ipv4", 00:24:06.010 "trsvcid": "$NVMF_PORT", 00:24:06.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.010 "hdgst": ${hdgst:-false}, 00:24:06.010 "ddgst": ${ddgst:-false} 00:24:06.010 }, 00:24:06.010 "method": "bdev_nvme_attach_controller" 00:24:06.010 } 00:24:06.010 EOF 00:24:06.010 )") 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.010 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.010 { 00:24:06.010 "params": { 00:24:06.010 "name": "Nvme$subsystem", 00:24:06.010 "trtype": "$TEST_TRANSPORT", 00:24:06.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.010 "adrfam": "ipv4", 00:24:06.010 "trsvcid": "$NVMF_PORT", 00:24:06.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.010 "hdgst": ${hdgst:-false}, 00:24:06.010 "ddgst": ${ddgst:-false} 00:24:06.010 }, 00:24:06.010 "method": "bdev_nvme_attach_controller" 00:24:06.010 } 00:24:06.010 EOF 00:24:06.010 )") 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.010 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.010 { 00:24:06.010 "params": { 00:24:06.010 "name": "Nvme$subsystem", 00:24:06.010 "trtype": "$TEST_TRANSPORT", 00:24:06.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.010 "adrfam": "ipv4", 00:24:06.010 "trsvcid": "$NVMF_PORT", 00:24:06.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.010 "hdgst": ${hdgst:-false}, 00:24:06.010 "ddgst": ${ddgst:-false} 00:24:06.010 }, 00:24:06.010 "method": "bdev_nvme_attach_controller" 00:24:06.010 } 00:24:06.010 EOF 00:24:06.010 )") 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.010 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.010 { 00:24:06.010 "params": { 00:24:06.010 "name": "Nvme$subsystem", 00:24:06.010 "trtype": "$TEST_TRANSPORT", 00:24:06.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.010 "adrfam": "ipv4", 00:24:06.010 "trsvcid": "$NVMF_PORT", 00:24:06.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.010 "hdgst": ${hdgst:-false}, 00:24:06.010 "ddgst": ${ddgst:-false} 00:24:06.010 }, 00:24:06.010 "method": "bdev_nvme_attach_controller" 00:24:06.010 } 00:24:06.010 EOF 00:24:06.010 )") 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.010 22:10:17 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:24:06.010 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.010 { 00:24:06.010 "params": { 00:24:06.010 "name": "Nvme$subsystem", 00:24:06.010 "trtype": "$TEST_TRANSPORT", 00:24:06.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.010 "adrfam": "ipv4", 00:24:06.010 "trsvcid": "$NVMF_PORT", 00:24:06.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.010 "hdgst": ${hdgst:-false}, 00:24:06.010 "ddgst": ${ddgst:-false} 00:24:06.010 }, 00:24:06.010 "method": "bdev_nvme_attach_controller" 00:24:06.010 } 00:24:06.010 EOF 00:24:06.010 )") 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.268 [2024-07-26 22:10:17.238759] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:06.268 [2024-07-26 22:10:17.238814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274674 ] 00:24:06.268 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.268 { 00:24:06.268 "params": { 00:24:06.268 "name": "Nvme$subsystem", 00:24:06.268 "trtype": "$TEST_TRANSPORT", 00:24:06.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.268 "adrfam": "ipv4", 00:24:06.268 "trsvcid": "$NVMF_PORT", 00:24:06.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.268 "hdgst": ${hdgst:-false}, 00:24:06.268 "ddgst": ${ddgst:-false} 00:24:06.268 }, 00:24:06.268 "method": "bdev_nvme_attach_controller" 00:24:06.268 } 00:24:06.268 EOF 00:24:06.268 )") 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.268 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.268 { 00:24:06.268 "params": { 00:24:06.268 "name": "Nvme$subsystem", 00:24:06.268 "trtype": "$TEST_TRANSPORT", 00:24:06.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.268 "adrfam": "ipv4", 00:24:06.268 "trsvcid": "$NVMF_PORT", 00:24:06.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.268 "hdgst": ${hdgst:-false}, 00:24:06.268 "ddgst": ${ddgst:-false} 00:24:06.268 }, 00:24:06.268 "method": "bdev_nvme_attach_controller" 00:24:06.268 } 00:24:06.268 EOF 00:24:06.268 )") 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.268 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.268 { 00:24:06.268 "params": { 00:24:06.268 "name": "Nvme$subsystem", 00:24:06.268 "trtype": "$TEST_TRANSPORT", 00:24:06.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.268 "adrfam": "ipv4", 00:24:06.268 "trsvcid": "$NVMF_PORT", 00:24:06.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.268 "hdgst": ${hdgst:-false}, 00:24:06.268 "ddgst": ${ddgst:-false} 00:24:06.268 }, 00:24:06.268 "method": "bdev_nvme_attach_controller" 00:24:06.268 } 00:24:06.268 EOF 00:24:06.268 )") 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.268 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.268 { 00:24:06.268 
"params": { 00:24:06.268 "name": "Nvme$subsystem", 00:24:06.268 "trtype": "$TEST_TRANSPORT", 00:24:06.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.268 "adrfam": "ipv4", 00:24:06.268 "trsvcid": "$NVMF_PORT", 00:24:06.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.268 "hdgst": ${hdgst:-false}, 00:24:06.268 "ddgst": ${ddgst:-false} 00:24:06.268 }, 00:24:06.268 "method": "bdev_nvme_attach_controller" 00:24:06.268 } 00:24:06.268 EOF 00:24:06.268 )") 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.268 22:10:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:06.268 22:10:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:06.268 { 00:24:06.268 "params": { 00:24:06.268 "name": "Nvme$subsystem", 00:24:06.268 "trtype": "$TEST_TRANSPORT", 00:24:06.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.268 "adrfam": "ipv4", 00:24:06.268 "trsvcid": "$NVMF_PORT", 00:24:06.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.268 "hdgst": ${hdgst:-false}, 00:24:06.269 "ddgst": ${ddgst:-false} 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 } 00:24:06.269 EOF 00:24:06.269 )") 00:24:06.269 22:10:17 -- nvmf/common.sh@542 -- # cat 00:24:06.269 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.269 22:10:17 -- nvmf/common.sh@544 -- # jq . 00:24:06.269 22:10:17 -- nvmf/common.sh@545 -- # IFS=, 00:24:06.269 22:10:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme1", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme2", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme3", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme4", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme5", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme6", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme7", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme8", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme9", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 },{ 00:24:06.269 "params": { 00:24:06.269 "name": "Nvme10", 00:24:06.269 "trtype": "rdma", 00:24:06.269 "traddr": "192.168.100.8", 00:24:06.269 "adrfam": "ipv4", 00:24:06.269 "trsvcid": "4420", 00:24:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:06.269 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:06.269 "hdgst": false, 00:24:06.269 "ddgst": false 00:24:06.269 }, 00:24:06.269 "method": "bdev_nvme_attach_controller" 00:24:06.269 }' 00:24:06.269 [2024-07-26 22:10:17.328057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.269 [2024-07-26 22:10:17.364219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.203 Running I/O for 10 seconds... 
00:24:07.770 22:10:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:07.770 22:10:18 -- common/autotest_common.sh@852 -- # return 0 00:24:07.770 22:10:18 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:07.770 22:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.770 22:10:18 -- common/autotest_common.sh@10 -- # set +x 00:24:07.770 22:10:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.770 22:10:18 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:07.770 22:10:18 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:07.770 22:10:18 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:07.770 22:10:18 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:07.770 22:10:18 -- target/shutdown.sh@57 -- # local ret=1 00:24:07.770 22:10:18 -- target/shutdown.sh@58 -- # local i 00:24:07.770 22:10:18 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:07.770 22:10:18 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:07.770 22:10:18 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:07.770 22:10:18 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:07.770 22:10:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.770 22:10:18 -- common/autotest_common.sh@10 -- # set +x 00:24:08.028 22:10:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:08.028 22:10:19 -- target/shutdown.sh@60 -- # read_io_count=461 00:24:08.028 22:10:19 -- target/shutdown.sh@63 -- # '[' 461 -ge 100 ']' 00:24:08.028 22:10:19 -- target/shutdown.sh@64 -- # ret=0 00:24:08.028 22:10:19 -- target/shutdown.sh@65 -- # break 00:24:08.028 22:10:19 -- target/shutdown.sh@69 -- # return 0 00:24:08.028 22:10:19 -- target/shutdown.sh@134 -- # killprocess 2274346 00:24:08.028 22:10:19 -- common/autotest_common.sh@926 -- # '[' -z 2274346 ']' 00:24:08.028 22:10:19 -- common/autotest_common.sh@930 -- # kill -0 2274346 00:24:08.028 22:10:19 -- common/autotest_common.sh@931 -- # uname 00:24:08.028 22:10:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:08.028 22:10:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2274346 00:24:08.028 22:10:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:08.028 22:10:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:08.028 22:10:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2274346' 00:24:08.028 killing process with pid 2274346 00:24:08.028 22:10:19 -- common/autotest_common.sh@945 -- # kill 2274346 00:24:08.029 22:10:19 -- common/autotest_common.sh@950 -- # wait 2274346 00:24:08.595 22:10:19 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:08.595 22:10:19 -- target/shutdown.sh@138 -- # sleep 1 00:24:09.174 [2024-07-26 22:10:20.141659] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257880 was disconnected and freed. reset controller. 
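Note on the shutdown trigger traced above: the test polls bdev_get_iostat on the bdevperf RPC socket and, once num_read_ops crosses the threshold (461 >= 100 here), kills the nvmf_tgt pid so the qpair teardown below can be observed. A minimal sketch of that polling loop, assuming SPDK's scripts/rpc.py is on PATH as rpc.py and using the socket and bdev names from the trace; the helper name wait_for_read_io, the retry count, and the sleep interval are illustrative.

# Sketch: wait until a bdev has served enough reads, then stop the target.
wait_for_read_io() {
    local sock=$1 bdev=$2 want=${3:-100}
    local i ops
    for ((i = 10; i != 0; i--)); do
        ops=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        # Stop waiting as soon as the bdev reports the required read count.
        if [[ $ops =~ ^[0-9]+$ ]] && ((ops >= want)); then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Usage matching the trace: require at least 100 reads on Nvme1n1, then kill the
# nvmf target (pid recorded earlier by the nvmfappstart step, 2274346 above).
if wait_for_read_io /var/tmp/bdevperf.sock Nvme1n1 100; then
    kill "$nvmfpid"
    # Wait for the process to actually exit before tearing the transport down.
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done
fi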
00:24:09.174 [2024-07-26 22:10:20.141773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000009ef380 len:0x10000 key:0x183700 00:24:09.174 [2024-07-26 22:10:20.141816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.141868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000719fd00 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.141902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.141940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:78336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002c7740 len:0x10000 key:0x183d00 00:24:09.174 [2024-07-26 22:10:20.141972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000714fa80 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000247340 len:0x10000 key:0x183d00 00:24:09.174 [2024-07-26 22:10:20.142177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000710f880 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000711f900 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070cf680 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 
22:10:20.142273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000207140 len:0x10000 key:0x183d00 00:24:09.174 [2024-07-26 22:10:20.142282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000715fb00 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000705f300 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b25f900 len:0x10000 key:0x184300 00:24:09.174 [2024-07-26 22:10:20.142343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x184300 00:24:09.174 [2024-07-26 22:10:20.142362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b21f700 len:0x10000 key:0x184300 00:24:09.174 [2024-07-26 22:10:20.142382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000718fc80 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x183400 00:24:09.174 [2024-07-26 22:10:20.142421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070df700 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x183400 00:24:09.174 [2024-07-26 22:10:20.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000700f080 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2afb80 len:0x10000 key:0x184300 00:24:09.174 [2024-07-26 22:10:20.142499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070ff800 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x184300 00:24:09.174 [2024-07-26 22:10:20.142539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000009df300 len:0x10000 key:0x183700 00:24:09.174 [2024-07-26 22:10:20.142558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000709f500 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x183b00 00:24:09.174 [2024-07-26 22:10:20.142597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x184300 00:24:09.174 [2024-07-26 22:10:20.142617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.174 [2024-07-26 22:10:20.142631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:5 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x183d00 00:24:09.175 [2024-07-26 22:10:20.142640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000703f200 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000287540 len:0x10000 key:0x183d00 00:24:09.175 [2024-07-26 22:10:20.142696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2bfc00 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.142715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x183d00 00:24:09.175 [2024-07-26 22:10:20.142735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000713fa00 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e0e9c0 len:0x10000 key:0x183400 00:24:09.175 [2024-07-26 22:10:20.142776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82688 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.142838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071bfe00 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071cfe80 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002975c0 len:0x10000 key:0x183d00 00:24:09.175 [2024-07-26 22:10:20.142899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071afd80 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071eff80 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070af580 len:0x10000 key:0x183b00 00:24:09.175 [2024-07-26 22:10:20.142959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.142979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.142989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002b76c0 len:0x10000 key:0x183d00 00:24:09.175 [2024-07-26 22:10:20.142998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x2000002372c0 len:0x10000 key:0x183d00 00:24:09.175 [2024-07-26 22:10:20.143020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011409000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001142a000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013509000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f23000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f02000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:73216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ee1000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ec0000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001254f000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001252e000 len:0x10000 key:0x184300 
00:24:09.175 [2024-07-26 22:10:20.143200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001250d000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed1b000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011742000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011721000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011700000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ef000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ce000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b529000 len:0x10000 key:0x184300 00:24:09.175 [2024-07-26 22:10:20.143362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.175 [2024-07-26 22:10:20.143373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b508000 len:0x10000 key:0x184300 00:24:09.176 [2024-07-26 22:10:20.143382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145676] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257640 was disconnected and freed. reset controller. 00:24:09.176 [2024-07-26 22:10:20.145697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008aef80 len:0x10000 key:0x183700 00:24:09.176 [2024-07-26 22:10:20.145707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.145730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000044f100 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.145751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001950f900 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.145778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.145798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000053f880 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.145818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.145839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000088ee80 len:0x10000 key:0x183700 00:24:09.176 [2024-07-26 22:10:20.145859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.145879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.145899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000049f380 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.145919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183700 00:24:09.176 [2024-07-26 22:10:20.145939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008cf080 len:0x10000 key:0x183700 00:24:09.176 [2024-07-26 22:10:20.145960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000043f080 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.145980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.145991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001951f980 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.146002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.146022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000056fa00 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81024 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20000081eb00 len:0x10000 key:0x183700 00:24:09.176 [2024-07-26 22:10:20.146063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000047f280 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004cf500 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000042f000 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.146144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004af400 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195afe00 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.146205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000041ef80 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f980 
len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000040ef00 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001956fc00 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.146288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000052f800 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001954fb00 len:0x10000 key:0x182a00 00:24:09.176 [2024-07-26 22:10:20.146331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000089ef00 len:0x10000 key:0x183700 00:24:09.176 [2024-07-26 22:10:20.146351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000087ee00 len:0x10000 key:0x183700 00:24:09.176 [2024-07-26 22:10:20.146372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x183700 00:24:09.176 [2024-07-26 22:10:20.146392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.176 [2024-07-26 22:10:20.146404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059fb80 len:0x10000 key:0x183800 00:24:09.176 [2024-07-26 22:10:20.146415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00 00:24:09.177 
[2024-07-26 22:10:20.146436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000084ec80 len:0x10000 key:0x183700 00:24:09.177 [2024-07-26 22:10:20.146456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080ea80 len:0x10000 key:0x183700 00:24:09.177 [2024-07-26 22:10:20.146478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005efe00 len:0x10000 key:0x183800 00:24:09.177 [2024-07-26 22:10:20.146498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x183800 00:24:09.177 [2024-07-26 22:10:20.146519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004df580 len:0x10000 key:0x183800 00:24:09.177 [2024-07-26 22:10:20.146539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005bfc80 len:0x10000 key:0x183800 00:24:09.177 [2024-07-26 22:10:20.146559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x182a00 00:24:09.177 [2024-07-26 22:10:20.146579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x183800 00:24:09.177 [2024-07-26 22:10:20.146599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ef800 len:0x10000 key:0x182a00 00:24:09.177 [2024-07-26 22:10:20.146620] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048f300 len:0x10000 key:0x183800 00:24:09.177 [2024-07-26 22:10:20.146655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195dff80 len:0x10000 key:0x182a00 00:24:09.177 [2024-07-26 22:10:20.146676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ff680 len:0x10000 key:0x183800 00:24:09.177 [2024-07-26 22:10:20.146696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000046f200 len:0x10000 key:0x183800 00:24:09.177 [2024-07-26 22:10:20.146718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x183700 00:24:09.177 [2024-07-26 22:10:20.146738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011322000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b83000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011b62000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc40000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0bf000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d09e000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110f1000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000110d0000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001275f000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001273e000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001271d000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012846000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.146983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.146995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012825000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.147004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.147015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012804000 len:0x10000 key:0x184300 00:24:09.177 [2024-07-26 22:10:20.147024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.149263] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257400 was disconnected and freed. reset controller. 00:24:09.177 [2024-07-26 22:10:20.149285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198bf680 len:0x10000 key:0x182c00 00:24:09.177 [2024-07-26 22:10:20.149296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.149312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001947f480 len:0x10000 key:0x182a00 00:24:09.177 [2024-07-26 22:10:20.149322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.149334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001945f380 len:0x10000 key:0x182a00 00:24:09.177 [2024-07-26 22:10:20.149347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.149358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x182b00 00:24:09.177 [2024-07-26 22:10:20.149369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.149380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x182c00 00:24:09.177 [2024-07-26 22:10:20.149390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.177 [2024-07-26 22:10:20.149402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001997fc80 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x182b00 00:24:09.178 [2024-07-26 22:10:20.149437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x200019bf0000 len:0x10000 key:0x182d00 00:24:09.178 [2024-07-26 22:10:20.149460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001995fb80 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001946f400 len:0x10000 key:0x182a00 00:24:09.178 [2024-07-26 22:10:20.149504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x182a00 00:24:09.178 [2024-07-26 22:10:20.149526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001987f480 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x182a00 00:24:09.178 [2024-07-26 22:10:20.149590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x182b00 00:24:09.178 [2024-07-26 22:10:20.149611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001943f280 len:0x10000 key:0x182a00 00:24:09.178 [2024-07-26 22:10:20.149637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x182d00 
00:24:09.178 [2024-07-26 22:10:20.149658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x182b00 00:24:09.178 [2024-07-26 22:10:20.149680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194af600 len:0x10000 key:0x182a00 00:24:09.178 [2024-07-26 22:10:20.149702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001981f180 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001996fc00 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198af600 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x182b00 00:24:09.178 [2024-07-26 22:10:20.149809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199afe00 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001968fb00 len:0x10000 key:0x182b00 00:24:09.178 [2024-07-26 
22:10:20.149852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001960f700 len:0x10000 key:0x182b00 00:24:09.178 [2024-07-26 22:10:20.149874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x182a00 00:24:09.178 [2024-07-26 22:10:20.149918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001983f280 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198cf700 len:0x10000 key:0x182c00 00:24:09.178 [2024-07-26 22:10:20.149982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.149994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x182b00 00:24:09.178 [2024-07-26 22:10:20.150004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.178 [2024-07-26 22:10:20.150016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x182a00 00:24:09.178 [2024-07-26 22:10:20.150025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.150037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x182b00 00:24:09.179 [2024-07-26 22:10:20.150048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.150059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x182b00 00:24:09.179 [2024-07-26 22:10:20.150069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.150081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001989f580 len:0x10000 key:0x182c00 00:24:09.179 [2024-07-26 22:10:20.150090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.150102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x182c00 00:24:09.179 [2024-07-26 22:10:20.150111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.150123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x182c00 00:24:09.179 [2024-07-26 22:10:20.150133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b8fd00 len:0x10000 key:0x182d00 00:24:09.179 [2024-07-26 22:10:20.159140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199dff80 len:0x10000 key:0x182c00 00:24:09.179 [2024-07-26 22:10:20.159177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001988f500 len:0x10000 key:0x182c00 00:24:09.179 [2024-07-26 22:10:20.159206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001965f980 len:0x10000 key:0x182b00 00:24:09.179 [2024-07-26 22:10:20.159233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199bfe80 len:0x10000 key:0x182c00 00:24:09.179 [2024-07-26 22:10:20.159262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000115d7000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000115f8000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011619000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001163a000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001165b000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c4e3000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c504000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011931000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011910000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001296f000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001294e000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a35000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012a14000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129f3000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129d2000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000129b1000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75a000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b739000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 
00:24:09.179 [2024-07-26 22:10:20.159821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b718000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.159851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6f7000 len:0x10000 key:0x184300 00:24:09.179 [2024-07-26 22:10:20.159863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.162074] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192571c0 was disconnected and freed. reset controller. 00:24:09.179 [2024-07-26 22:10:20.162105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182e00 00:24:09.179 [2024-07-26 22:10:20.162119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.162137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182e00 00:24:09.179 [2024-07-26 22:10:20.162150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.162165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c8f500 len:0x10000 key:0x182e00 00:24:09.179 [2024-07-26 22:10:20.162178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.162193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182e00 00:24:09.179 [2024-07-26 22:10:20.162206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.179 [2024-07-26 22:10:20.162221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:24:09.179 [2024-07-26 22:10:20.162234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a1f780 len:0x10000 key:0x182d00 00:24:09.180 [2024-07-26 22:10:20.162261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162289] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fbfe80 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d6fc00 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a0f700 len:0x10000 key:0x182d00 00:24:09.180 [2024-07-26 22:10:20.162430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f4fb00 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f8fd00 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d9fd80 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5f380 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c2f200 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a3f880 len:0x10000 key:0x182d00 00:24:09.180 [2024-07-26 22:10:20.162715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cef800 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019df0000 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c4f300 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ff0000 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fdff80 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eff880 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.162936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4fb00 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.162979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cbf680 len:0x10000 key:0x182e00 00:24:09.180 [2024-07-26 22:10:20.162994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.163022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e7f480 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.163051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fcff00 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.163079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.163107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.163135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f5fb80 len:0x10000 key:0x182f00 00:24:09.180 [2024-07-26 22:10:20.163163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebd1000 len:0x10000 key:0x184300 00:24:09.180 [2024-07-26 22:10:20.163190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ebb0000 len:0x10000 key:0x184300 00:24:09.180 [2024-07-26 22:10:20.163218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.180 [2024-07-26 22:10:20.163234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000115b6000 len:0x10000 key:0x184300 00:24:09.180 [2024-07-26 22:10:20.163246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011595000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011574000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 
00:24:09.181 [2024-07-26 22:10:20.163322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012b1c000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012afb000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ada000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c45000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c24000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012c03000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012be2000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012bc1000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012ba0000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163573] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc1f000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbfe000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b949000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b928000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b907000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8e6000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.163894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x184300 00:24:09.181 [2024-07-26 22:10:20.163907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166129] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f80 was disconnected and freed. reset controller. 00:24:09.181 [2024-07-26 22:10:20.166155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x182f00 00:24:09.181 [2024-07-26 22:10:20.166169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x183300 00:24:09.181 [2024-07-26 22:10:20.166205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183300 00:24:09.181 [2024-07-26 22:10:20.166234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183300 00:24:09.181 [2024-07-26 22:10:20.166262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x184000 00:24:09.181 [2024-07-26 22:10:20.166289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:24:09.181 [2024-07-26 22:10:20.166317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:24:09.181 [2024-07-26 22:10:20.166345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a55fb80 len:0x10000 key:0x184000 00:24:09.181 [2024-07-26 22:10:20.166372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x183300 00:24:09.181 [2024-07-26 22:10:20.166400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:24:09.181 [2024-07-26 22:10:20.166427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a53fa80 len:0x10000 key:0x184000 00:24:09.181 [2024-07-26 22:10:20.166455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x184000 00:24:09.181 [2024-07-26 22:10:20.166483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.181 [2024-07-26 22:10:20.166500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.166513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.166541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.166570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.166599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.166673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:24:09.182 [2024-07-26 22:10:20.166716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.166744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:24:09.182 [2024-07-26 22:10:20.166772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:24:09.182 [2024-07-26 22:10:20.166800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.166828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x184000 00:24:09.182 [2024-07-26 22:10:20.166856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:24:09.182 [2024-07-26 22:10:20.166886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 
00:24:09.182 [2024-07-26 22:10:20.166901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.166914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:24:09.182 [2024-07-26 22:10:20.166944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x184000 00:24:09.182 [2024-07-26 22:10:20.166972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.166987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.167000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.167027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x184000 00:24:09.182 [2024-07-26 22:10:20.167055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x184000 00:24:09.182 [2024-07-26 22:10:20.167083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:24:09.182 [2024-07-26 22:10:20.167110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:24:09.182 [2024-07-26 22:10:20.167138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167153] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.167166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:24:09.182 [2024-07-26 22:10:20.167196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfe80 len:0x10000 key:0x183300 00:24:09.182 [2024-07-26 22:10:20.167224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d0b000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012cea000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e55000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e34000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012e13000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012df2000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012dd1000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012db0000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x184300 00:24:09.182 [2024-07-26 22:10:20.167557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.182 [2024-07-26 22:10:20.167571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7a000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb59000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb17000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fcf6000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd17000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fd38000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x2000120cc000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ab000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001208a000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.167978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.167993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bce5000 len:0x10000 key:0x184300 00:24:09.183 [2024-07-26 22:10:20.168006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170392] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256d40 was disconnected and freed. reset controller. 00:24:09.183 [2024-07-26 22:10:20.170420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x183600 00:24:09.183 [2024-07-26 22:10:20.170435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183a00 00:24:09.183 [2024-07-26 22:10:20.170468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183a00 00:24:09.183 [2024-07-26 22:10:20.170499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183600 00:24:09.183 [2024-07-26 22:10:20.170528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183a00 00:24:09.183 [2024-07-26 22:10:20.170560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 
[2024-07-26 22:10:20.170575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183600 00:24:09.183 [2024-07-26 22:10:20.170588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x184000 00:24:09.183 [2024-07-26 22:10:20.170616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183a00 00:24:09.183 [2024-07-26 22:10:20.170649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183a00 00:24:09.183 [2024-07-26 22:10:20.170676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183600 00:24:09.183 [2024-07-26 22:10:20.170704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183a00 00:24:09.183 [2024-07-26 22:10:20.170732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183a00 00:24:09.183 [2024-07-26 22:10:20.170759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183600 00:24:09.183 [2024-07-26 22:10:20.170787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183600 00:24:09.183 [2024-07-26 22:10:20.170814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.183 [2024-07-26 22:10:20.170829] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.170842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.170856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.170871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.170887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.170900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.170915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.170928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.170943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.170956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.170970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.170983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.170998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.171038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.171066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.171121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.171182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.171268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.171296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183a00 00:24:09.184 [2024-07-26 22:10:20.171385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x183600 00:24:09.184 [2024-07-26 22:10:20.171458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f7e000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f5d000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f3c000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012f1b000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012efa000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013065000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013044000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013023000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013002000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fe1000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012fc0000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x184300 00:24:09.184 [2024-07-26 22:10:20.171865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.184 [2024-07-26 22:10:20.171880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x20000c01e000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.171893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.171909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bffd000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.171922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.171936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d72000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.171949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.171964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d93000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.171977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.171992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011db4000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011dd5000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da07000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da28000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da49000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000da6a000 len:0x10000 key:0x184300 
00:24:09.185 [2024-07-26 22:10:20.172141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd48000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.172241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd27000 len:0x10000 key:0x184300 00:24:09.185 [2024-07-26 22:10:20.172254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174012] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256b00 was disconnected and freed. reset controller. 
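The same ABORTED - SQ DELETION pattern above repeats for every outstanding command on each qpair before the controller is reset. As a reading aid only (this helper is not part of the test run or of SPDK), the following minimal Python sketch counts the aborted READ/WRITE commands in a saved copy of this console output; the regular expression simply mirrors the nvme_io_qpair_print_command format shown in the log lines.

    # Hypothetical log-condensing helper, not part of the autotest output.
    # It matches lines of the form printed above, e.g.:
    #   nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87424 len:128
    import re
    import sys
    from collections import Counter

    CMD = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def summarize(log_text: str) -> Counter:
        """Count how many READ vs WRITE commands were dumped as aborted."""
        return Counter(m.group(1) for m in CMD.finditer(log_text))

    if __name__ == "__main__":
        text = sys.stdin.read()
        for opcode, count in summarize(text).items():
            print(f"{opcode}: {count} commands printed as ABORTED - SQ DELETION")

Fed the console text on stdin, this prints one line per opcode; it only condenses what is already visible above and makes no assumptions about SPDK internals.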
00:24:09.185 [2024-07-26 22:10:20.174037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183900 00:24:09.185 [2024-07-26 22:10:20.174166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x183500 00:24:09.185 [2024-07-26 22:10:20.174193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183500 00:24:09.185 [2024-07-26 22:10:20.174221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af0f900 len:0x10000 key:0x183900 00:24:09.185 [2024-07-26 22:10:20.174253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 
22:10:20.174299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x183500 00:24:09.185 [2024-07-26 22:10:20.174313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeef800 len:0x10000 key:0x183900 00:24:09.185 [2024-07-26 22:10:20.174340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183900 00:24:09.185 [2024-07-26 22:10:20.174368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183900 00:24:09.185 [2024-07-26 22:10:20.174507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x183500 00:24:09.185 [2024-07-26 22:10:20.174535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174550] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183c00 00:24:09.185 [2024-07-26 22:10:20.174568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x183500 00:24:09.185 [2024-07-26 22:10:20.174596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aabfc80 len:0x10000 key:0x183500 00:24:09.185 [2024-07-26 22:10:20.174624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.185 [2024-07-26 22:10:20.174645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183900 00:24:09.185 [2024-07-26 22:10:20.174658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183900 00:24:09.186 [2024-07-26 22:10:20.174686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183c00 00:24:09.186 [2024-07-26 22:10:20.174714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183c00 00:24:09.186 [2024-07-26 22:10:20.174743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183c00 00:24:09.186 [2024-07-26 22:10:20.174770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183900 00:24:09.186 [2024-07-26 22:10:20.174798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183c00 00:24:09.186 [2024-07-26 22:10:20.174825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183c00 00:24:09.186 [2024-07-26 22:10:20.174853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183900 00:24:09.186 [2024-07-26 22:10:20.174882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183900 00:24:09.186 [2024-07-26 22:10:20.174911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x183500 00:24:09.186 [2024-07-26 22:10:20.174938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x183500 00:24:09.186 [2024-07-26 22:10:20.174966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.174981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183900 00:24:09.186 [2024-07-26 22:10:20.174994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x183500 00:24:09.186 [2024-07-26 22:10:20.175024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183c00 00:24:09.186 [2024-07-26 22:10:20.175054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 
0x2000131d0000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1ec000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001290c000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128eb000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128ca000 len:0x10000 key:0x184300 
00:24:09.186 [2024-07-26 22:10:20.175607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128a9000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012888000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x184300 00:24:09.186 [2024-07-26 22:10:20.175696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.186 [2024-07-26 22:10:20.175712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x184300 00:24:09.187 [2024-07-26 22:10:20.175725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.175740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x184300 00:24:09.187 [2024-07-26 22:10:20.175753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.175768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x184300 00:24:09.187 [2024-07-26 22:10:20.175780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.175795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x184300 00:24:09.187 [2024-07-26 22:10:20.175807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.175822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x184300 00:24:09.187 [2024-07-26 22:10:20.175836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.175852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf16000 len:0x10000 key:0x184300 00:24:09.187 [2024-07-26 22:10:20.175868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.177965] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192568c0 was disconnected and freed. reset controller. 00:24:09.187 [2024-07-26 22:10:20.177994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeaf600 len:0x10000 key:0x183900 00:24:09.187 [2024-07-26 22:10:20.178008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae2f200 len:0x10000 key:0x183900 00:24:09.187 [2024-07-26 22:10:20.178095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f800 len:0x10000 key:0x183f00 00:24:09.187 [2024-07-26 22:10:20.178175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0dfd80 len:0x10000 key:0x183f00 00:24:09.187 [2024-07-26 22:10:20.178214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5dff80 len:0x10000 key:0x184200 00:24:09.187 [2024-07-26 22:10:20.178249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f900 len:0x10000 key:0x183f00 00:24:09.187 [2024-07-26 22:10:20.178332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5bfe80 len:0x10000 key:0x184200 00:24:09.187 [2024-07-26 22:10:20.178371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae6f400 len:0x10000 key:0x183900 00:24:09.187 [2024-07-26 22:10:20.178447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae3f280 len:0x10000 key:0x183900 00:24:09.187 [2024-07-26 22:10:20.178482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001b09fb80 len:0x10000 key:0x183f00 00:24:09.187 [2024-07-26 22:10:20.178631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f780 len:0x10000 key:0x183f00 00:24:09.187 [2024-07-26 22:10:20.178691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0afc00 len:0x10000 key:0x183f00 00:24:09.187 [2024-07-26 22:10:20.178719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183100 00:24:09.187 [2024-07-26 22:10:20.178794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aebf680 len:0x10000 key:0x183900 00:24:09.187 [2024-07-26 22:10:20.178830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.187 [2024-07-26 22:10:20.178849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183100 00:24:09.188 [2024-07-26 22:10:20.178866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.178884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae8f500 len:0x10000 key:0x183900 00:24:09.188 [2024-07-26 22:10:20.178901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.178920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 
key:0x183100 00:24:09.188 [2024-07-26 22:10:20.178937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.178956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae5f380 len:0x10000 key:0x183900 00:24:09.188 [2024-07-26 22:10:20.178973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.178993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183100 00:24:09.188 [2024-07-26 22:10:20.179010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183100 00:24:09.188 [2024-07-26 22:10:20.179044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183100 00:24:09.188 [2024-07-26 22:10:20.179079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f980 len:0x10000 key:0x183f00 00:24:09.188 [2024-07-26 22:10:20.179113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aedf780 len:0x10000 key:0x183900 00:24:09.188 [2024-07-26 22:10:20.179152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183100 00:24:09.188 [2024-07-26 22:10:20.179188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f700 len:0x10000 key:0x183f00 00:24:09.188 [2024-07-26 22:10:20.179223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae4f300 len:0x10000 key:0x183900 00:24:09.188 [2024-07-26 
22:10:20.179264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013422000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013401000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133e0000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.179970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.179988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.180012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.180029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.180047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.180064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.180083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.180099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.180118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x184300 00:24:09.188 [2024-07-26 22:10:20.180134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.188 [2024-07-26 22:10:20.180153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca4d000 len:0x10000 key:0x184300 00:24:09.189 [2024-07-26 22:10:20.180169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.180187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca2c000 len:0x10000 key:0x184300 00:24:09.189 [2024-07-26 22:10:20.180204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.180223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0b000 len:0x10000 key:0x184300 00:24:09.189 [2024-07-26 22:10:20.180239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.180258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ea000 len:0x10000 key:0x184300 00:24:09.189 [2024-07-26 22:10:20.180274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.182646] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:24:09.189 [2024-07-26 22:10:20.182719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.182780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.182841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.182885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.182947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.182992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x184200 00:24:09.189 [2024-07-26 22:10:20.183222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183200 
00:24:09.189 [2024-07-26 22:10:20.183258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:56960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x184200 00:24:09.189 [2024-07-26 22:10:20.183440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x184200 00:24:09.189 [2024-07-26 22:10:20.183479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:58624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183919] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.183954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.183977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.183994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.184013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184400 00:24:09.189 [2024-07-26 22:10:20.184029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.184048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.184065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.184084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x184200 00:24:09.189 [2024-07-26 22:10:20.184100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.189 [2024-07-26 22:10:20.184120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183200 00:24:09.189 [2024-07-26 22:10:20.184136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8af600 len:0x10000 key:0x184400 00:24:09.190 [2024-07-26 22:10:20.184171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x184200 00:24:09.190 [2024-07-26 22:10:20.184241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:60032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184400 00:24:09.190 [2024-07-26 22:10:20.184462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x184200 00:24:09.190 [2024-07-26 22:10:20.184497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183200 00:24:09.190 [2024-07-26 22:10:20.184735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184400 00:24:09.190 [2024-07-26 22:10:20.184770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012699000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.184806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ed000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.184841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001210e000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.184876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001212f000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.184912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 
00:24:09.190 [2024-07-26 22:10:20.184931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010aa0000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.184952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.184971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:52224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ac1000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.184988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:52352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ae2000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:52480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d143000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:52608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d164000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d185000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:53376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b5ad000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b58c000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e856000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185265] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e835000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.185300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e814000 len:0x10000 key:0x184300 00:24:09.190 [2024-07-26 22:10:20.185316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2326c000 sqhd:5310 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.188012] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256440 was disconnected and freed. reset controller. 00:24:09.190 [2024-07-26 22:10:20.188087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.190 [2024-07-26 22:10:20.188107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.188125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.190 [2024-07-26 22:10:20.188141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.188158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.190 [2024-07-26 22:10:20.188174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.188191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.190 [2024-07-26 22:10:20.188208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.190 [2024-07-26 22:10:20.190267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.191 [2024-07-26 22:10:20.190289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:09.191 [2024-07-26 22:10:20.190306] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.191 [2024-07-26 22:10:20.190332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.190358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.190376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.190392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.190409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.190426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.190444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.190465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.192616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.191 [2024-07-26 22:10:20.192644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:09.191 [2024-07-26 22:10:20.192659] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.191 [2024-07-26 22:10:20.192682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.192699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.192717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.192733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.192750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.192766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.192783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.192799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.194710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.191 [2024-07-26 22:10:20.194731] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.191 [2024-07-26 22:10:20.194747] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.191 [2024-07-26 22:10:20.194770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.194787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.194804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.194821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.194838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.194858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.194875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.194891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.197179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.191 [2024-07-26 22:10:20.197237] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:09.191 [2024-07-26 22:10:20.197275] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.191 [2024-07-26 22:10:20.197330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.197371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.197413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.197452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.197494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.197535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.197576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.197616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.199670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.191 [2024-07-26 22:10:20.199719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:09.191 [2024-07-26 22:10:20.199756] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.191 [2024-07-26 22:10:20.199809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.199850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.199892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.199932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.199979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.200025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.200067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.200113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.202051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.191 [2024-07-26 22:10:20.202108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:09.191 [2024-07-26 22:10:20.202146] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.191 [2024-07-26 22:10:20.202203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.202251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.202294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.202339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.202389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.202429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.202471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.202517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.204495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.191 [2024-07-26 22:10:20.204551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:09.191 [2024-07-26 22:10:20.204589] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.191 [2024-07-26 22:10:20.204764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.204810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.204851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.204892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.204933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.204974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.205014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.205054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.191 [2024-07-26 22:10:20.207130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.191 [2024-07-26 22:10:20.207178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:09.191 [2024-07-26 22:10:20.207216] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.191 [2024-07-26 22:10:20.207268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.191 [2024-07-26 22:10:20.207285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.192 [2024-07-26 22:10:20.207306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.192 [2024-07-26 22:10:20.207323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.192 [2024-07-26 22:10:20.207340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.192 [2024-07-26 22:10:20.207356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.192 [2024-07-26 22:10:20.207372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.192 [2024-07-26 22:10:20.207389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.192 [2024-07-26 22:10:20.209183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.192 [2024-07-26 22:10:20.209239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:09.192 [2024-07-26 22:10:20.209280] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.192 [2024-07-26 22:10:20.209333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.192 [2024-07-26 22:10:20.209374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.192 [2024-07-26 22:10:20.209416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.192 [2024-07-26 22:10:20.209448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.192 [2024-07-26 22:10:20.209465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.192 [2024-07-26 22:10:20.209481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.192 [2024-07-26 22:10:20.209498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.192 [2024-07-26 22:10:20.209514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:46069 cdw0:0 sqhd:9a00 p:1 m:0 dnr:0 00:24:09.192 [2024-07-26 22:10:20.227223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:09.192 [2024-07-26 22:10:20.227281] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:09.192 [2024-07-26 22:10:20.227319] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.192 [2024-07-26 22:10:20.236016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.192 [2024-07-26 22:10:20.236044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:09.192 [2024-07-26 22:10:20.236057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:09.192 [2024-07-26 22:10:20.236102] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.192 [2024-07-26 22:10:20.236122] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.192 [2024-07-26 22:10:20.236137] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.192 [2024-07-26 22:10:20.236151] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.192 [2024-07-26 22:10:20.236168] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.192 [2024-07-26 22:10:20.236187] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.192 [2024-07-26 22:10:20.236202] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:09.192 [2024-07-26 22:10:20.236290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:09.192 [2024-07-26 22:10:20.236310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:09.192 [2024-07-26 22:10:20.236324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:24:09.192 [2024-07-26 22:10:20.236339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:09.192 [2024-07-26 22:10:20.238400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:09.192 task offset: 83968 on job bdev=Nvme1n1 fails
00:24:09.192
00:24:09.192 Latency(us)
00:24:09.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:09.192 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme1n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme1n1 : 2.01 312.88 19.56 31.89 0.00 184850.77 43830.48 1080452.71
00:24:09.192 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme2n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme2n1 : 2.01 294.81 18.43 31.87 0.00 194523.80 44669.34 1160983.35
00:24:09.192 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme3n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme3n1 : 2.01 299.66 18.73 31.86 0.00 190936.24 44040.19 1154272.46
00:24:09.192 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme4n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme4n1 : 2.01 309.48 19.34 31.84 0.00 185060.06 42362.47 1147561.57
00:24:09.192 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme5n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme5n1 : 2.01 309.85 19.37 31.83 0.00 184245.03 41104.18 1140850.69
00:24:09.192 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme6n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme6n1 : 2.01 312.21 19.51 31.82 0.00 182349.18 40894.46 1134139.80
00:24:09.192 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme7n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme7n1 : 2.01 312.07 19.50 31.80 0.00 181820.35 41733.32 1127428.92
00:24:09.192 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme8n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme8n1 : 2.01 311.94 19.50 31.79 0.00 181466.99 42572.19 1120718.03
00:24:09.192 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme9n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme9n1 : 2.01 311.81 19.49 31.78 0.00 180893.45 43411.05 1120718.03
00:24:09.192 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.192 Job: Nvme10n1 ended in about 2.01 seconds with error
00:24:09.192 Verification LBA range: start 0x0 length 0x400
00:24:09.192 Nvme10n1 : 2.01 207.95 13.00 31.76 0.00 258320.28 49912.22 1114007.14
00:24:09.192 ===================================================================================================================
00:24:09.192 Total : 2982.67 186.42 318.24 0.00 190380.06 40894.46 1160983.35
00:24:09.192 [2024-07-26 22:10:20.259024] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:09.192 [2024-07-26 22:10:20.259046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:09.192 [2024-07-26 22:10:20.259060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:09.192 [2024-07-26 22:10:20.268439] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:09.192 [2024-07-26 22:10:20.268498] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:09.192 [2024-07-26 22:10:20.268542] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0
00:24:09.192 [2024-07-26 22:10:20.268679] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:09.192 [2024-07-26 22:10:20.268715] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:09.192 [2024-07-26 22:10:20.268740] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0
00:24:09.192 [2024-07-26 22:10:20.268894] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:09.192 [2024-07-26 22:10:20.268928] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:09.193 [2024-07-26 22:10:20.268952] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580
00:24:09.193 [2024-07-26 22:10:20.272472] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:09.193 [2024-07-26 22:10:20.272520] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:09.193 [2024-07-26 22:10:20.272546] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0
00:24:09.193 [2024-07-26 22:10:20.272678] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:09.193 [2024-07-26 22:10:20.272714] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:24:09.193 [2024-07-26 22:10:20.272739] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100
00:24:09.193 [2024-07-26 22:10:20.272858] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:24:09.193 [2024-07-26 22:10:20.272892] nvme_rdma.c:1163:nvme_rdma_connect_established:
*ERROR*: RDMA connect error -74 00:24:09.193 [2024-07-26 22:10:20.272917] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:24:09.193 [2024-07-26 22:10:20.273028] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:09.193 [2024-07-26 22:10:20.273061] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:09.193 [2024-07-26 22:10:20.273086] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:24:09.193 [2024-07-26 22:10:20.273814] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:09.193 [2024-07-26 22:10:20.273832] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:09.193 [2024-07-26 22:10:20.273842] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:24:09.193 [2024-07-26 22:10:20.273947] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:09.193 [2024-07-26 22:10:20.273961] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:09.193 [2024-07-26 22:10:20.273971] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:24:09.193 [2024-07-26 22:10:20.274076] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:09.193 [2024-07-26 22:10:20.274090] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:09.193 [2024-07-26 22:10:20.274100] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:24:09.452 22:10:20 -- target/shutdown.sh@141 -- # kill -9 2274674 00:24:09.452 22:10:20 -- target/shutdown.sh@143 -- # stoptarget 00:24:09.452 22:10:20 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:09.452 22:10:20 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:09.452 22:10:20 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:09.452 22:10:20 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:09.452 22:10:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:09.452 22:10:20 -- nvmf/common.sh@116 -- # sync 00:24:09.452 22:10:20 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:09.452 22:10:20 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:09.452 22:10:20 -- nvmf/common.sh@119 -- # set +e 00:24:09.452 22:10:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:09.452 22:10:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:09.452 rmmod nvme_rdma 00:24:09.452 rmmod nvme_fabrics 00:24:09.452 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 2274674 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:24:09.452 22:10:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:09.452 22:10:20 -- nvmf/common.sh@123 -- # set -e 00:24:09.452 22:10:20 -- 
nvmf/common.sh@124 -- # return 0 00:24:09.452 22:10:20 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:09.452 22:10:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:09.452 22:10:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:09.452 00:24:09.452 real 0m5.261s 00:24:09.452 user 0m17.997s 00:24:09.452 sys 0m1.402s 00:24:09.452 22:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.452 22:10:20 -- common/autotest_common.sh@10 -- # set +x 00:24:09.452 ************************************ 00:24:09.452 END TEST nvmf_shutdown_tc3 00:24:09.452 ************************************ 00:24:09.452 22:10:20 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:09.452 00:24:09.452 real 0m26.712s 00:24:09.452 user 1m14.133s 00:24:09.452 sys 0m10.484s 00:24:09.452 22:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.452 22:10:20 -- common/autotest_common.sh@10 -- # set +x 00:24:09.452 ************************************ 00:24:09.453 END TEST nvmf_shutdown 00:24:09.453 ************************************ 00:24:09.712 22:10:20 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:09.712 22:10:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:09.712 22:10:20 -- common/autotest_common.sh@10 -- # set +x 00:24:09.712 22:10:20 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:09.712 22:10:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:09.712 22:10:20 -- common/autotest_common.sh@10 -- # set +x 00:24:09.712 22:10:20 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:09.712 22:10:20 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:09.712 22:10:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:09.712 22:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:09.712 22:10:20 -- common/autotest_common.sh@10 -- # set +x 00:24:09.712 ************************************ 00:24:09.712 START TEST nvmf_multicontroller 00:24:09.712 ************************************ 00:24:09.712 22:10:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:09.712 * Looking for test storage... 
00:24:09.712 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:09.712 22:10:20 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.712 22:10:20 -- nvmf/common.sh@7 -- # uname -s 00:24:09.712 22:10:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.712 22:10:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.712 22:10:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.712 22:10:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.712 22:10:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.712 22:10:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.712 22:10:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.712 22:10:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.712 22:10:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.712 22:10:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.712 22:10:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:09.712 22:10:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:09.712 22:10:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.712 22:10:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.712 22:10:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.712 22:10:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:09.712 22:10:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.712 22:10:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.712 22:10:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.712 22:10:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.712 22:10:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.712 22:10:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.712 22:10:20 -- paths/export.sh@5 -- # export PATH 00:24:09.712 22:10:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.712 22:10:20 -- nvmf/common.sh@46 -- # : 0 00:24:09.712 22:10:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:09.712 22:10:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:09.712 22:10:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:09.712 22:10:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.712 22:10:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.712 22:10:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:09.713 22:10:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:09.713 22:10:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:09.713 22:10:20 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:09.713 22:10:20 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:09.713 22:10:20 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:09.713 22:10:20 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:09.713 22:10:20 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.713 22:10:20 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:24:09.713 22:10:20 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:09.713 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:24:09.713 22:10:20 -- host/multicontroller.sh@20 -- # exit 0 00:24:09.713 00:24:09.713 real 0m0.141s 00:24:09.713 user 0m0.062s 00:24:09.713 sys 0m0.090s 00:24:09.713 22:10:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:09.713 22:10:20 -- common/autotest_common.sh@10 -- # set +x 00:24:09.713 ************************************ 00:24:09.713 END TEST nvmf_multicontroller 00:24:09.713 ************************************ 00:24:09.972 22:10:20 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:09.972 22:10:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:09.972 22:10:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:09.972 22:10:20 -- common/autotest_common.sh@10 -- # set +x 00:24:09.972 ************************************ 00:24:09.972 START TEST nvmf_aer 00:24:09.972 ************************************ 00:24:09.972 22:10:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:09.972 * Looking for test storage... 00:24:09.972 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:09.972 22:10:21 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.972 22:10:21 -- nvmf/common.sh@7 -- # uname -s 00:24:09.972 22:10:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.972 22:10:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.972 22:10:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.972 22:10:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.972 22:10:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.972 22:10:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.972 22:10:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.972 22:10:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.972 22:10:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.972 22:10:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.972 22:10:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:09.972 22:10:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:09.972 22:10:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.972 22:10:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.972 22:10:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.972 22:10:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:09.972 22:10:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.972 22:10:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.972 22:10:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.972 22:10:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.972 22:10:21 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.972 22:10:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.972 22:10:21 -- paths/export.sh@5 -- # export PATH 00:24:09.972 22:10:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.972 22:10:21 -- nvmf/common.sh@46 -- # : 0 00:24:09.972 22:10:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:09.972 22:10:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:09.972 22:10:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:09.972 22:10:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.973 22:10:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.973 22:10:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:09.973 22:10:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:09.973 22:10:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:09.973 22:10:21 -- host/aer.sh@11 -- # nvmftestinit 00:24:09.973 22:10:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:09.973 22:10:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.973 22:10:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:09.973 22:10:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:09.973 22:10:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:09.973 22:10:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.973 22:10:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.973 22:10:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.973 22:10:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:09.973 22:10:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:09.973 22:10:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:09.973 22:10:21 -- common/autotest_common.sh@10 -- # set +x 00:24:18.123 22:10:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:18.123 22:10:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:18.123 22:10:28 -- nvmf/common.sh@290 -- # local -a 
pci_devs 00:24:18.123 22:10:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:18.123 22:10:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:18.123 22:10:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:18.123 22:10:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:18.123 22:10:28 -- nvmf/common.sh@294 -- # net_devs=() 00:24:18.123 22:10:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:18.123 22:10:28 -- nvmf/common.sh@295 -- # e810=() 00:24:18.123 22:10:28 -- nvmf/common.sh@295 -- # local -ga e810 00:24:18.123 22:10:28 -- nvmf/common.sh@296 -- # x722=() 00:24:18.123 22:10:28 -- nvmf/common.sh@296 -- # local -ga x722 00:24:18.123 22:10:28 -- nvmf/common.sh@297 -- # mlx=() 00:24:18.123 22:10:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:18.123 22:10:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.123 22:10:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:18.123 22:10:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:18.123 22:10:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:18.123 22:10:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:18.123 22:10:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:18.123 22:10:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:18.123 22:10:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:18.123 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:18.123 22:10:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:18.123 22:10:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:18.123 22:10:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:18.123 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:18.123 22:10:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
00:24:18.123 22:10:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:18.123 22:10:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:18.123 22:10:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:18.123 22:10:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.123 22:10:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:18.123 22:10:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.123 22:10:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:18.123 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:18.123 22:10:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.123 22:10:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:18.123 22:10:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.123 22:10:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:18.123 22:10:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.123 22:10:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:18.123 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:18.123 22:10:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.123 22:10:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:18.123 22:10:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:18.123 22:10:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:18.123 22:10:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:18.123 22:10:28 -- nvmf/common.sh@57 -- # uname 00:24:18.123 22:10:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:18.123 22:10:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:18.123 22:10:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:18.123 22:10:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:18.123 22:10:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:18.123 22:10:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:18.123 22:10:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:18.123 22:10:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:18.123 22:10:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:18.123 22:10:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:18.123 22:10:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:18.123 22:10:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:18.123 22:10:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:18.123 22:10:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:18.123 22:10:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:18.123 22:10:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:18.123 22:10:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:18.123 22:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:18.123 22:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:18.123 22:10:28 -- nvmf/common.sh@104 -- # continue 2 00:24:18.123 22:10:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:18.123 22:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:24:18.123 22:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:18.123 22:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:18.123 22:10:28 -- nvmf/common.sh@104 -- # continue 2 00:24:18.123 22:10:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:18.123 22:10:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:18.123 22:10:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:18.123 22:10:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:18.123 22:10:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:18.123 22:10:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:18.123 22:10:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:18.123 22:10:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:18.123 22:10:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:18.123 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:18.124 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:18.124 altname enp217s0f0np0 00:24:18.124 altname ens818f0np0 00:24:18.124 inet 192.168.100.8/24 scope global mlx_0_0 00:24:18.124 valid_lft forever preferred_lft forever 00:24:18.124 22:10:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:18.124 22:10:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:18.124 22:10:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:18.124 22:10:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:18.124 22:10:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:18.124 22:10:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:18.124 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:18.124 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:18.124 altname enp217s0f1np1 00:24:18.124 altname ens818f1np1 00:24:18.124 inet 192.168.100.9/24 scope global mlx_0_1 00:24:18.124 valid_lft forever preferred_lft forever 00:24:18.124 22:10:28 -- nvmf/common.sh@410 -- # return 0 00:24:18.124 22:10:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:18.124 22:10:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:18.124 22:10:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:18.124 22:10:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:18.124 22:10:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:18.124 22:10:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:18.124 22:10:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:18.124 22:10:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:18.124 22:10:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:18.124 22:10:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:18.124 22:10:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:18.124 22:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:18.124 22:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:18.124 22:10:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:18.124 22:10:28 -- nvmf/common.sh@104 -- # continue 2 00:24:18.124 22:10:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 
00:24:18.124 22:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:18.124 22:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:18.124 22:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:18.124 22:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:18.124 22:10:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:18.124 22:10:28 -- nvmf/common.sh@104 -- # continue 2 00:24:18.124 22:10:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:18.124 22:10:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:18.124 22:10:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:18.124 22:10:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:18.124 22:10:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:18.124 22:10:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:18.124 22:10:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:18.124 22:10:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:18.124 192.168.100.9' 00:24:18.124 22:10:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:18.124 192.168.100.9' 00:24:18.124 22:10:28 -- nvmf/common.sh@445 -- # head -n 1 00:24:18.124 22:10:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:18.124 22:10:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:18.124 192.168.100.9' 00:24:18.124 22:10:28 -- nvmf/common.sh@446 -- # tail -n +2 00:24:18.124 22:10:28 -- nvmf/common.sh@446 -- # head -n 1 00:24:18.124 22:10:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:18.124 22:10:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:18.124 22:10:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:18.124 22:10:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:18.124 22:10:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:18.124 22:10:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:18.124 22:10:28 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:18.124 22:10:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:18.124 22:10:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:18.124 22:10:28 -- common/autotest_common.sh@10 -- # set +x 00:24:18.124 22:10:28 -- nvmf/common.sh@469 -- # nvmfpid=2279239 00:24:18.124 22:10:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:18.124 22:10:28 -- nvmf/common.sh@470 -- # waitforlisten 2279239 00:24:18.124 22:10:28 -- common/autotest_common.sh@819 -- # '[' -z 2279239 ']' 00:24:18.124 22:10:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.124 22:10:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:18.124 22:10:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
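Note: the nvmfappstart call above launches the target binary with a 4-core reactor mask and then blocks in waitforlisten until the RPC socket answers. A minimal hand-run equivalent is sketched below; the polling loop is illustrative only (not the exact waitforlisten implementation) and the relative paths assume the SPDK repository root as the working directory.

# Start the NVMe-oF target with the same arguments as the trace above.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
tgt_pid=$!

# Wait until the app answers on its default RPC socket. rpc_get_methods is a
# cheap RPC that only succeeds once startup has finished.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt ($tgt_pid) is up and listening on /var/tmp/spdk.sock"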
00:24:18.124 22:10:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:18.124 22:10:28 -- common/autotest_common.sh@10 -- # set +x 00:24:18.124 [2024-07-26 22:10:28.692873] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:18.124 [2024-07-26 22:10:28.692923] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.124 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.124 [2024-07-26 22:10:28.780355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.124 [2024-07-26 22:10:28.817367] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:18.124 [2024-07-26 22:10:28.817484] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.124 [2024-07-26 22:10:28.817494] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.124 [2024-07-26 22:10:28.817503] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.124 [2024-07-26 22:10:28.817554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.124 [2024-07-26 22:10:28.817682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.124 [2024-07-26 22:10:28.817708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.124 [2024-07-26 22:10:28.817710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.383 22:10:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:18.383 22:10:29 -- common/autotest_common.sh@852 -- # return 0 00:24:18.383 22:10:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:18.383 22:10:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:18.383 22:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.383 22:10:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.383 22:10:29 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:18.383 22:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.383 22:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.383 [2024-07-26 22:10:29.566262] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12e54b0/0x12e99a0) succeed. 00:24:18.383 [2024-07-26 22:10:29.576430] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12e6aa0/0x132b030) succeed. 
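Note: with the target running, aer.sh creates the RDMA transport via rpc_cmd, and the two create_ib_device notices above confirm that both mlx5 ports were picked up. The same call issued by hand would look like this sketch (rpc.py path assumed relative to the SPDK tree):

# Create the RDMA transport: 1024 shared receive buffers, 8 KiB in-capsule data.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Optional sanity check: list the transports the target now exposes.
./scripts/rpc.py nvmf_get_transports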
00:24:18.643 22:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.643 22:10:29 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:18.643 22:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.643 22:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.643 Malloc0 00:24:18.643 22:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.643 22:10:29 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:18.643 22:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.643 22:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.643 22:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.643 22:10:29 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:18.643 22:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.643 22:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.643 22:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.643 22:10:29 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:18.643 22:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.643 22:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.643 [2024-07-26 22:10:29.742158] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:18.643 22:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.643 22:10:29 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:18.643 22:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.643 22:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.643 [2024-07-26 22:10:29.749799] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:18.643 [ 00:24:18.643 { 00:24:18.643 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:18.643 "subtype": "Discovery", 00:24:18.643 "listen_addresses": [], 00:24:18.643 "allow_any_host": true, 00:24:18.643 "hosts": [] 00:24:18.643 }, 00:24:18.643 { 00:24:18.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.643 "subtype": "NVMe", 00:24:18.643 "listen_addresses": [ 00:24:18.643 { 00:24:18.643 "transport": "RDMA", 00:24:18.643 "trtype": "RDMA", 00:24:18.643 "adrfam": "IPv4", 00:24:18.643 "traddr": "192.168.100.8", 00:24:18.643 "trsvcid": "4420" 00:24:18.643 } 00:24:18.643 ], 00:24:18.643 "allow_any_host": true, 00:24:18.643 "hosts": [], 00:24:18.643 "serial_number": "SPDK00000000000001", 00:24:18.643 "model_number": "SPDK bdev Controller", 00:24:18.643 "max_namespaces": 2, 00:24:18.643 "min_cntlid": 1, 00:24:18.643 "max_cntlid": 65519, 00:24:18.643 "namespaces": [ 00:24:18.643 { 00:24:18.643 "nsid": 1, 00:24:18.643 "bdev_name": "Malloc0", 00:24:18.643 "name": "Malloc0", 00:24:18.643 "nguid": "F0EA8C6EC8E24D92BFBCB9E4D7033144", 00:24:18.643 "uuid": "f0ea8c6e-c8e2-4d92-bfbc-b9e4d7033144" 00:24:18.643 } 00:24:18.643 ] 00:24:18.643 } 00:24:18.643 ] 00:24:18.643 22:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.643 22:10:29 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:18.643 22:10:29 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:18.643 22:10:29 -- host/aer.sh@33 -- # aerpid=2279530 00:24:18.643 22:10:29 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:18.643 22:10:29 -- 
common/autotest_common.sh@1244 -- # local i=0 00:24:18.643 22:10:29 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:18.643 22:10:29 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:18.643 22:10:29 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:24:18.643 22:10:29 -- common/autotest_common.sh@1247 -- # i=1 00:24:18.643 22:10:29 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:18.643 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.903 22:10:29 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:18.903 22:10:29 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:24:18.903 22:10:29 -- common/autotest_common.sh@1247 -- # i=2 00:24:18.903 22:10:29 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:18.903 22:10:29 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:18.903 22:10:29 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:18.903 22:10:29 -- common/autotest_common.sh@1255 -- # return 0 00:24:18.903 22:10:29 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:18.903 22:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.903 22:10:29 -- common/autotest_common.sh@10 -- # set +x 00:24:18.903 Malloc1 00:24:18.903 22:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.903 22:10:30 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:18.903 22:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.903 22:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:18.903 22:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.903 22:10:30 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:18.903 22:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.903 22:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:18.903 [ 00:24:18.903 { 00:24:18.903 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:18.903 "subtype": "Discovery", 00:24:18.903 "listen_addresses": [], 00:24:18.903 "allow_any_host": true, 00:24:18.903 "hosts": [] 00:24:18.903 }, 00:24:18.903 { 00:24:18.903 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.903 "subtype": "NVMe", 00:24:18.903 "listen_addresses": [ 00:24:18.903 { 00:24:18.903 "transport": "RDMA", 00:24:18.903 "trtype": "RDMA", 00:24:18.903 "adrfam": "IPv4", 00:24:18.903 "traddr": "192.168.100.8", 00:24:18.903 "trsvcid": "4420" 00:24:18.903 } 00:24:18.903 ], 00:24:18.903 "allow_any_host": true, 00:24:18.903 "hosts": [], 00:24:18.903 "serial_number": "SPDK00000000000001", 00:24:18.903 "model_number": "SPDK bdev Controller", 00:24:18.903 "max_namespaces": 2, 00:24:18.903 "min_cntlid": 1, 00:24:18.903 "max_cntlid": 65519, 00:24:18.903 "namespaces": [ 00:24:18.903 { 00:24:18.903 "nsid": 1, 00:24:18.903 "bdev_name": "Malloc0", 00:24:18.903 "name": "Malloc0", 00:24:18.903 "nguid": "F0EA8C6EC8E24D92BFBCB9E4D7033144", 00:24:18.903 "uuid": "f0ea8c6e-c8e2-4d92-bfbc-b9e4d7033144" 00:24:18.903 }, 00:24:18.903 { 00:24:18.903 "nsid": 2, 00:24:18.903 "bdev_name": "Malloc1", 00:24:18.903 "name": "Malloc1", 00:24:18.903 "nguid": "0C295C548AC44F53BB9B358359F2FFD9", 00:24:18.903 "uuid": "0c295c54-8ac4-4f53-bb9b-358359f2ffd9" 00:24:18.903 } 00:24:18.903 ] 00:24:18.903 } 00:24:18.903 ] 00:24:18.903 22:10:30 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.903 22:10:30 -- host/aer.sh@43 -- # wait 2279530 00:24:18.903 Asynchronous Event Request test 00:24:18.903 Attaching to 192.168.100.8 00:24:18.903 Attached to 192.168.100.8 00:24:18.903 Registering asynchronous event callbacks... 00:24:18.903 Starting namespace attribute notice tests for all controllers... 00:24:18.903 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:18.903 aer_cb - Changed Namespace 00:24:18.903 Cleaning up... 00:24:18.903 22:10:30 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:18.903 22:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.903 22:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:18.903 22:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.903 22:10:30 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:18.903 22:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.903 22:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:18.903 22:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.903 22:10:30 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.903 22:10:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.903 22:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:19.162 22:10:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.162 22:10:30 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:19.162 22:10:30 -- host/aer.sh@51 -- # nvmftestfini 00:24:19.162 22:10:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:19.162 22:10:30 -- nvmf/common.sh@116 -- # sync 00:24:19.162 22:10:30 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:19.162 22:10:30 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:19.162 22:10:30 -- nvmf/common.sh@119 -- # set +e 00:24:19.162 22:10:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:19.162 22:10:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:19.162 rmmod nvme_rdma 00:24:19.163 rmmod nvme_fabrics 00:24:19.163 22:10:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:19.163 22:10:30 -- nvmf/common.sh@123 -- # set -e 00:24:19.163 22:10:30 -- nvmf/common.sh@124 -- # return 0 00:24:19.163 22:10:30 -- nvmf/common.sh@477 -- # '[' -n 2279239 ']' 00:24:19.163 22:10:30 -- nvmf/common.sh@478 -- # killprocess 2279239 00:24:19.163 22:10:30 -- common/autotest_common.sh@926 -- # '[' -z 2279239 ']' 00:24:19.163 22:10:30 -- common/autotest_common.sh@930 -- # kill -0 2279239 00:24:19.163 22:10:30 -- common/autotest_common.sh@931 -- # uname 00:24:19.163 22:10:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:19.163 22:10:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2279239 00:24:19.163 22:10:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:19.163 22:10:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:19.163 22:10:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2279239' 00:24:19.163 killing process with pid 2279239 00:24:19.163 22:10:30 -- common/autotest_common.sh@945 -- # kill 2279239 00:24:19.163 [2024-07-26 22:10:30.234744] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:19.163 22:10:30 -- common/autotest_common.sh@950 -- # wait 2279239 00:24:19.422 22:10:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
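Note: the "aer_cb - Changed Namespace" line above is the point of this test. The aer tool registers an Asynchronous Event Request against cnode1, and hot-adding a second namespace on the target fires the namespace-attribute-changed notice. A condensed sketch of that flow, using the same commands that appear in the trace (paths relative to the SPDK tree, touch-file wait simplified):

# Host side: connect to the subsystem and wait for AENs; the tool touches
# /tmp/aer_touch_file once its AER callback is armed.
test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

# Target side: hot-add a second namespace, which triggers the
# namespace-attribute-changed AEN seen in the log.
./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

# The aer tool exits after handling the event.
wait $aerpid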
00:24:19.422 22:10:30 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:19.422 00:24:19.422 real 0m9.528s 00:24:19.422 user 0m8.494s 00:24:19.422 sys 0m6.270s 00:24:19.422 22:10:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:19.422 22:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:19.422 ************************************ 00:24:19.422 END TEST nvmf_aer 00:24:19.422 ************************************ 00:24:19.422 22:10:30 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:19.422 22:10:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:19.422 22:10:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:19.422 22:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:19.422 ************************************ 00:24:19.422 START TEST nvmf_async_init 00:24:19.422 ************************************ 00:24:19.422 22:10:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:19.422 * Looking for test storage... 00:24:19.422 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:19.422 22:10:30 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.422 22:10:30 -- nvmf/common.sh@7 -- # uname -s 00:24:19.422 22:10:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.422 22:10:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.422 22:10:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.422 22:10:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.422 22:10:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.422 22:10:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.422 22:10:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.422 22:10:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.422 22:10:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.682 22:10:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.682 22:10:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:19.682 22:10:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:19.682 22:10:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.682 22:10:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.682 22:10:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.682 22:10:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:19.682 22:10:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.682 22:10:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.682 22:10:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.682 22:10:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.682 22:10:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.682 22:10:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.682 22:10:30 -- paths/export.sh@5 -- # export PATH 00:24:19.682 22:10:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.682 22:10:30 -- nvmf/common.sh@46 -- # : 0 00:24:19.682 22:10:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:19.682 22:10:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:19.682 22:10:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:19.682 22:10:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.682 22:10:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.682 22:10:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:19.682 22:10:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:19.682 22:10:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:19.682 22:10:30 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:19.682 22:10:30 -- host/async_init.sh@14 -- # null_block_size=512 00:24:19.682 22:10:30 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:19.682 22:10:30 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:19.682 22:10:30 -- host/async_init.sh@20 -- # uuidgen 00:24:19.682 22:10:30 -- host/async_init.sh@20 -- # tr -d - 00:24:19.682 22:10:30 -- host/async_init.sh@20 -- # nguid=171d17bfc5e1452ca302661180722dfc 00:24:19.682 22:10:30 -- host/async_init.sh@22 -- # nvmftestinit 00:24:19.682 22:10:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 
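Note: async_init.sh prepares a 1024-block, 512-byte null bdev and derives the namespace NGUID by stripping the dashes out of a freshly generated UUID, which is what the uuidgen | tr -d - pair above does. In isolation:

# Same derivation as host/async_init.sh: a 32-hex-digit NGUID from a UUID.
nguid=$(uuidgen | tr -d -)
echo "$nguid"    # e.g. 171d17bfc5e1452ca302661180722dfc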
00:24:19.682 22:10:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.682 22:10:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:19.682 22:10:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:19.682 22:10:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:19.682 22:10:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.682 22:10:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.682 22:10:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.682 22:10:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:19.682 22:10:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:19.682 22:10:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:19.682 22:10:30 -- common/autotest_common.sh@10 -- # set +x 00:24:27.800 22:10:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:27.800 22:10:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:27.800 22:10:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:27.800 22:10:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:27.800 22:10:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:27.800 22:10:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:27.800 22:10:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:27.800 22:10:37 -- nvmf/common.sh@294 -- # net_devs=() 00:24:27.800 22:10:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:27.800 22:10:37 -- nvmf/common.sh@295 -- # e810=() 00:24:27.800 22:10:37 -- nvmf/common.sh@295 -- # local -ga e810 00:24:27.800 22:10:37 -- nvmf/common.sh@296 -- # x722=() 00:24:27.800 22:10:37 -- nvmf/common.sh@296 -- # local -ga x722 00:24:27.800 22:10:37 -- nvmf/common.sh@297 -- # mlx=() 00:24:27.800 22:10:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:27.800 22:10:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.800 22:10:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:27.800 22:10:37 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:27.800 22:10:37 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:27.800 22:10:37 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:27.800 22:10:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:27.800 22:10:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:27.800 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:27.800 22:10:37 -- 
nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:27.800 22:10:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:27.800 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:27.800 22:10:37 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:27.800 22:10:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:27.800 22:10:37 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.800 22:10:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:27.800 22:10:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.800 22:10:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:27.800 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:27.800 22:10:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.800 22:10:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.800 22:10:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:27.800 22:10:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.800 22:10:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:27.800 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:27.800 22:10:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.800 22:10:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:27.800 22:10:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:27.800 22:10:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:27.800 22:10:37 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:27.800 22:10:37 -- nvmf/common.sh@57 -- # uname 00:24:27.800 22:10:37 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:27.800 22:10:37 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:27.800 22:10:37 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:27.800 22:10:37 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:27.800 22:10:37 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:27.800 22:10:37 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:27.800 22:10:37 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:27.800 22:10:37 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:27.800 22:10:37 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:27.800 22:10:37 -- nvmf/common.sh@71 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:24:27.800 22:10:37 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:27.800 22:10:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:27.800 22:10:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:27.800 22:10:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:27.800 22:10:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:27.800 22:10:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:27.800 22:10:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:27.800 22:10:37 -- nvmf/common.sh@104 -- # continue 2 00:24:27.800 22:10:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:27.800 22:10:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:27.800 22:10:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:27.800 22:10:37 -- nvmf/common.sh@104 -- # continue 2 00:24:27.800 22:10:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:27.801 22:10:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:27.801 22:10:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:27.801 22:10:37 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:27.801 22:10:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:27.801 22:10:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:27.801 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:27.801 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:27.801 altname enp217s0f0np0 00:24:27.801 altname ens818f0np0 00:24:27.801 inet 192.168.100.8/24 scope global mlx_0_0 00:24:27.801 valid_lft forever preferred_lft forever 00:24:27.801 22:10:37 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:27.801 22:10:37 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:27.801 22:10:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:27.801 22:10:37 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:27.801 22:10:37 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:27.801 22:10:37 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:27.801 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:27.801 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:27.801 altname enp217s0f1np1 00:24:27.801 altname ens818f1np1 00:24:27.801 inet 192.168.100.9/24 scope global mlx_0_1 00:24:27.801 valid_lft forever preferred_lft forever 00:24:27.801 22:10:37 -- nvmf/common.sh@410 -- # return 0 00:24:27.801 22:10:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:27.801 22:10:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:27.801 22:10:37 -- 
nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:27.801 22:10:37 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:27.801 22:10:37 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:27.801 22:10:37 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:27.801 22:10:37 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:27.801 22:10:37 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:27.801 22:10:37 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:27.801 22:10:37 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:27.801 22:10:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:27.801 22:10:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:27.801 22:10:37 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:27.801 22:10:37 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:27.801 22:10:37 -- nvmf/common.sh@104 -- # continue 2 00:24:27.801 22:10:37 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:27.801 22:10:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:27.801 22:10:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:27.801 22:10:37 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:27.801 22:10:37 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:27.801 22:10:37 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:27.801 22:10:37 -- nvmf/common.sh@104 -- # continue 2 00:24:27.801 22:10:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:27.801 22:10:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:27.801 22:10:37 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:27.801 22:10:37 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:27.801 22:10:37 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:27.801 22:10:37 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:27.801 22:10:37 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:27.801 22:10:37 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:27.801 192.168.100.9' 00:24:27.801 22:10:37 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:27.801 192.168.100.9' 00:24:27.801 22:10:37 -- nvmf/common.sh@445 -- # head -n 1 00:24:27.801 22:10:37 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:27.801 22:10:37 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:27.801 192.168.100.9' 00:24:27.801 22:10:37 -- nvmf/common.sh@446 -- # tail -n +2 00:24:27.801 22:10:37 -- nvmf/common.sh@446 -- # head -n 1 00:24:27.801 22:10:37 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:27.801 22:10:37 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:27.801 22:10:37 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:27.801 22:10:37 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:27.801 22:10:37 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:27.801 22:10:37 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:27.801 22:10:37 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:27.801 22:10:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:27.801 
22:10:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:27.801 22:10:37 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 22:10:37 -- nvmf/common.sh@469 -- # nvmfpid=2283452 00:24:27.801 22:10:37 -- nvmf/common.sh@470 -- # waitforlisten 2283452 00:24:27.801 22:10:37 -- common/autotest_common.sh@819 -- # '[' -z 2283452 ']' 00:24:27.801 22:10:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.801 22:10:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:27.801 22:10:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.801 22:10:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:27.801 22:10:37 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 22:10:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:27.801 [2024-07-26 22:10:38.011687] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:27.801 [2024-07-26 22:10:38.011735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.801 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.801 [2024-07-26 22:10:38.095674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:27.801 [2024-07-26 22:10:38.133323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:27.801 [2024-07-26 22:10:38.133429] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.801 [2024-07-26 22:10:38.133439] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.801 [2024-07-26 22:10:38.133448] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.801 [2024-07-26 22:10:38.133469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.801 22:10:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:27.801 22:10:38 -- common/autotest_common.sh@852 -- # return 0 00:24:27.801 22:10:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:27.801 22:10:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:27.801 22:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 22:10:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.801 22:10:38 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:27.801 22:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.801 22:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 [2024-07-26 22:10:38.867596] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b11320/0x1b15810) succeed. 00:24:27.801 [2024-07-26 22:10:38.876935] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b12820/0x1b56ea0) succeed. 
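Note: the async_init run starts a single-core target (-m 0x1) and, as the next part of the trace shows, builds a null-bdev-backed subsystem whose namespace carries the NGUID generated earlier. A hand-run sketch of the same RPC sequence (a minimal equivalent, not the script itself):

# 1024 blocks of 512 bytes, exposed as nsid 1 of cnode0 with an explicit NGUID.
./scripts/rpc.py bdev_null_create null0 1024 512
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420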
00:24:27.801 22:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.801 22:10:38 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:27.801 22:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.801 22:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 null0 00:24:27.801 22:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.801 22:10:38 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:27.801 22:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.801 22:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 22:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.801 22:10:38 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:27.801 22:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.801 22:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 22:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.801 22:10:38 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 171d17bfc5e1452ca302661180722dfc 00:24:27.801 22:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.801 22:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 22:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.801 22:10:38 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:27.801 22:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.801 22:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:27.801 [2024-07-26 22:10:38.957874] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:27.801 22:10:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:27.801 22:10:38 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:27.801 22:10:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:27.801 22:10:38 -- common/autotest_common.sh@10 -- # set +x 00:24:28.061 nvme0n1 00:24:28.061 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.061 22:10:39 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.061 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.061 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.061 [ 00:24:28.061 { 00:24:28.061 "name": "nvme0n1", 00:24:28.061 "aliases": [ 00:24:28.061 "171d17bf-c5e1-452c-a302-661180722dfc" 00:24:28.061 ], 00:24:28.061 "product_name": "NVMe disk", 00:24:28.061 "block_size": 512, 00:24:28.061 "num_blocks": 2097152, 00:24:28.061 "uuid": "171d17bf-c5e1-452c-a302-661180722dfc", 00:24:28.061 "assigned_rate_limits": { 00:24:28.061 "rw_ios_per_sec": 0, 00:24:28.061 "rw_mbytes_per_sec": 0, 00:24:28.061 "r_mbytes_per_sec": 0, 00:24:28.061 "w_mbytes_per_sec": 0 00:24:28.061 }, 00:24:28.061 "claimed": false, 00:24:28.061 "zoned": false, 00:24:28.061 "supported_io_types": { 00:24:28.061 "read": true, 00:24:28.061 "write": true, 00:24:28.061 "unmap": false, 00:24:28.061 "write_zeroes": true, 00:24:28.061 "flush": true, 00:24:28.061 "reset": true, 00:24:28.061 "compare": true, 00:24:28.061 "compare_and_write": true, 00:24:28.061 "abort": true, 00:24:28.061 "nvme_admin": true, 00:24:28.061 "nvme_io": true 00:24:28.061 }, 00:24:28.061 "memory_domains": [ 00:24:28.061 { 00:24:28.061 
"dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:28.061 "dma_device_type": 0 00:24:28.061 } 00:24:28.061 ], 00:24:28.061 "driver_specific": { 00:24:28.061 "nvme": [ 00:24:28.061 { 00:24:28.061 "trid": { 00:24:28.061 "trtype": "RDMA", 00:24:28.061 "adrfam": "IPv4", 00:24:28.061 "traddr": "192.168.100.8", 00:24:28.061 "trsvcid": "4420", 00:24:28.061 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.061 }, 00:24:28.061 "ctrlr_data": { 00:24:28.061 "cntlid": 1, 00:24:28.061 "vendor_id": "0x8086", 00:24:28.061 "model_number": "SPDK bdev Controller", 00:24:28.061 "serial_number": "00000000000000000000", 00:24:28.061 "firmware_revision": "24.01.1", 00:24:28.061 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.061 "oacs": { 00:24:28.061 "security": 0, 00:24:28.061 "format": 0, 00:24:28.061 "firmware": 0, 00:24:28.061 "ns_manage": 0 00:24:28.061 }, 00:24:28.061 "multi_ctrlr": true, 00:24:28.061 "ana_reporting": false 00:24:28.061 }, 00:24:28.061 "vs": { 00:24:28.062 "nvme_version": "1.3" 00:24:28.062 }, 00:24:28.062 "ns_data": { 00:24:28.062 "id": 1, 00:24:28.062 "can_share": true 00:24:28.062 } 00:24:28.062 } 00:24:28.062 ], 00:24:28.062 "mp_policy": "active_passive" 00:24:28.062 } 00:24:28.062 } 00:24:28.062 ] 00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.062 [2024-07-26 22:10:39.058163] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:28.062 [2024-07-26 22:10:39.081871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:28.062 [2024-07-26 22:10:39.106901] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.062 [ 00:24:28.062 { 00:24:28.062 "name": "nvme0n1", 00:24:28.062 "aliases": [ 00:24:28.062 "171d17bf-c5e1-452c-a302-661180722dfc" 00:24:28.062 ], 00:24:28.062 "product_name": "NVMe disk", 00:24:28.062 "block_size": 512, 00:24:28.062 "num_blocks": 2097152, 00:24:28.062 "uuid": "171d17bf-c5e1-452c-a302-661180722dfc", 00:24:28.062 "assigned_rate_limits": { 00:24:28.062 "rw_ios_per_sec": 0, 00:24:28.062 "rw_mbytes_per_sec": 0, 00:24:28.062 "r_mbytes_per_sec": 0, 00:24:28.062 "w_mbytes_per_sec": 0 00:24:28.062 }, 00:24:28.062 "claimed": false, 00:24:28.062 "zoned": false, 00:24:28.062 "supported_io_types": { 00:24:28.062 "read": true, 00:24:28.062 "write": true, 00:24:28.062 "unmap": false, 00:24:28.062 "write_zeroes": true, 00:24:28.062 "flush": true, 00:24:28.062 "reset": true, 00:24:28.062 "compare": true, 00:24:28.062 "compare_and_write": true, 00:24:28.062 "abort": true, 00:24:28.062 "nvme_admin": true, 00:24:28.062 "nvme_io": true 00:24:28.062 }, 00:24:28.062 "memory_domains": [ 00:24:28.062 { 00:24:28.062 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:28.062 "dma_device_type": 0 00:24:28.062 } 00:24:28.062 ], 00:24:28.062 "driver_specific": { 00:24:28.062 "nvme": [ 00:24:28.062 { 00:24:28.062 "trid": { 00:24:28.062 "trtype": "RDMA", 00:24:28.062 "adrfam": "IPv4", 00:24:28.062 "traddr": "192.168.100.8", 00:24:28.062 "trsvcid": "4420", 00:24:28.062 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.062 }, 00:24:28.062 "ctrlr_data": { 00:24:28.062 "cntlid": 2, 00:24:28.062 "vendor_id": "0x8086", 00:24:28.062 "model_number": "SPDK bdev Controller", 00:24:28.062 "serial_number": "00000000000000000000", 00:24:28.062 "firmware_revision": "24.01.1", 00:24:28.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.062 "oacs": { 00:24:28.062 "security": 0, 00:24:28.062 "format": 0, 00:24:28.062 "firmware": 0, 00:24:28.062 "ns_manage": 0 00:24:28.062 }, 00:24:28.062 "multi_ctrlr": true, 00:24:28.062 "ana_reporting": false 00:24:28.062 }, 00:24:28.062 "vs": { 00:24:28.062 "nvme_version": "1.3" 00:24:28.062 }, 00:24:28.062 "ns_data": { 00:24:28.062 "id": 1, 00:24:28.062 "can_share": true 00:24:28.062 } 00:24:28.062 } 00:24:28.062 ], 00:24:28.062 "mp_policy": "active_passive" 00:24:28.062 } 00:24:28.062 } 00:24:28.062 ] 00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@53 -- # mktemp 00:24:28.062 22:10:39 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.x0MpzZ4ev8 00:24:28.062 22:10:39 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:28.062 22:10:39 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.x0MpzZ4ev8 00:24:28.062 22:10:39 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 
00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.062 [2024-07-26 22:10:39.166272] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x0MpzZ4ev8 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.x0MpzZ4ev8 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.062 [2024-07-26 22:10:39.182297] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:28.062 nvme0n1 00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.062 [ 00:24:28.062 { 00:24:28.062 "name": "nvme0n1", 00:24:28.062 "aliases": [ 00:24:28.062 "171d17bf-c5e1-452c-a302-661180722dfc" 00:24:28.062 ], 00:24:28.062 "product_name": "NVMe disk", 00:24:28.062 "block_size": 512, 00:24:28.062 "num_blocks": 2097152, 00:24:28.062 "uuid": "171d17bf-c5e1-452c-a302-661180722dfc", 00:24:28.062 "assigned_rate_limits": { 00:24:28.062 "rw_ios_per_sec": 0, 00:24:28.062 "rw_mbytes_per_sec": 0, 00:24:28.062 "r_mbytes_per_sec": 0, 00:24:28.062 "w_mbytes_per_sec": 0 00:24:28.062 }, 00:24:28.062 "claimed": false, 00:24:28.062 "zoned": false, 00:24:28.062 "supported_io_types": { 00:24:28.062 "read": true, 00:24:28.062 "write": true, 00:24:28.062 "unmap": false, 00:24:28.062 "write_zeroes": true, 00:24:28.062 "flush": true, 00:24:28.062 "reset": true, 00:24:28.062 "compare": true, 00:24:28.062 "compare_and_write": true, 00:24:28.062 "abort": true, 00:24:28.062 "nvme_admin": true, 00:24:28.062 "nvme_io": true 00:24:28.062 }, 00:24:28.062 "memory_domains": [ 00:24:28.062 { 00:24:28.062 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:28.062 "dma_device_type": 0 00:24:28.062 } 00:24:28.062 ], 00:24:28.062 "driver_specific": { 00:24:28.062 "nvme": [ 00:24:28.062 { 00:24:28.062 "trid": { 00:24:28.062 "trtype": "RDMA", 00:24:28.062 "adrfam": "IPv4", 00:24:28.062 "traddr": "192.168.100.8", 00:24:28.062 "trsvcid": "4421", 00:24:28.062 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:28.062 }, 00:24:28.062 "ctrlr_data": { 00:24:28.062 "cntlid": 3, 00:24:28.062 "vendor_id": "0x8086", 00:24:28.062 "model_number": "SPDK bdev Controller", 00:24:28.062 "serial_number": "00000000000000000000", 00:24:28.062 "firmware_revision": "24.01.1", 00:24:28.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.062 
"oacs": { 00:24:28.062 "security": 0, 00:24:28.062 "format": 0, 00:24:28.062 "firmware": 0, 00:24:28.062 "ns_manage": 0 00:24:28.062 }, 00:24:28.062 "multi_ctrlr": true, 00:24:28.062 "ana_reporting": false 00:24:28.062 }, 00:24:28.062 "vs": { 00:24:28.062 "nvme_version": "1.3" 00:24:28.062 }, 00:24:28.062 "ns_data": { 00:24:28.062 "id": 1, 00:24:28.062 "can_share": true 00:24:28.062 } 00:24:28.062 } 00:24:28.062 ], 00:24:28.062 "mp_policy": "active_passive" 00:24:28.062 } 00:24:28.062 } 00:24:28.062 ] 00:24:28.062 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.062 22:10:39 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.062 22:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:28.062 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.322 22:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:28.322 22:10:39 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.x0MpzZ4ev8 00:24:28.322 22:10:39 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:28.322 22:10:39 -- host/async_init.sh@78 -- # nvmftestfini 00:24:28.322 22:10:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:28.322 22:10:39 -- nvmf/common.sh@116 -- # sync 00:24:28.322 22:10:39 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:28.322 22:10:39 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:28.322 22:10:39 -- nvmf/common.sh@119 -- # set +e 00:24:28.322 22:10:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:28.322 22:10:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:28.322 rmmod nvme_rdma 00:24:28.322 rmmod nvme_fabrics 00:24:28.322 22:10:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:28.322 22:10:39 -- nvmf/common.sh@123 -- # set -e 00:24:28.322 22:10:39 -- nvmf/common.sh@124 -- # return 0 00:24:28.322 22:10:39 -- nvmf/common.sh@477 -- # '[' -n 2283452 ']' 00:24:28.322 22:10:39 -- nvmf/common.sh@478 -- # killprocess 2283452 00:24:28.322 22:10:39 -- common/autotest_common.sh@926 -- # '[' -z 2283452 ']' 00:24:28.322 22:10:39 -- common/autotest_common.sh@930 -- # kill -0 2283452 00:24:28.322 22:10:39 -- common/autotest_common.sh@931 -- # uname 00:24:28.322 22:10:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:28.322 22:10:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2283452 00:24:28.322 22:10:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:28.322 22:10:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:28.322 22:10:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2283452' 00:24:28.322 killing process with pid 2283452 00:24:28.322 22:10:39 -- common/autotest_common.sh@945 -- # kill 2283452 00:24:28.322 22:10:39 -- common/autotest_common.sh@950 -- # wait 2283452 00:24:28.582 22:10:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:28.582 22:10:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:28.582 00:24:28.582 real 0m9.068s 00:24:28.582 user 0m3.522s 00:24:28.582 sys 0m5.933s 00:24:28.582 22:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.582 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 ************************************ 00:24:28.582 END TEST nvmf_async_init 00:24:28.582 ************************************ 00:24:28.582 22:10:39 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:28.582 22:10:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:28.582 
22:10:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:28.582 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:28.582 ************************************ 00:24:28.582 START TEST dma 00:24:28.582 ************************************ 00:24:28.582 22:10:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:28.582 * Looking for test storage... 00:24:28.582 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:28.582 22:10:39 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.582 22:10:39 -- nvmf/common.sh@7 -- # uname -s 00:24:28.582 22:10:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.582 22:10:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.582 22:10:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.582 22:10:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.582 22:10:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.582 22:10:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.582 22:10:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.582 22:10:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.582 22:10:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.582 22:10:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.582 22:10:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:28.582 22:10:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:28.582 22:10:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.582 22:10:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.582 22:10:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.582 22:10:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:28.582 22:10:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.582 22:10:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.582 22:10:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.582 22:10:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.582 22:10:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.582 22:10:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.582 22:10:39 -- paths/export.sh@5 -- # export PATH 00:24:28.582 22:10:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.582 22:10:39 -- nvmf/common.sh@46 -- # : 0 00:24:28.582 22:10:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:28.582 22:10:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:28.582 22:10:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:28.582 22:10:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.582 22:10:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.582 22:10:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:28.582 22:10:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:28.582 22:10:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:28.582 22:10:39 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:24:28.582 22:10:39 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:24:28.583 22:10:39 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:24:28.583 22:10:39 -- host/dma.sh@18 -- # subsystem=0 00:24:28.583 22:10:39 -- host/dma.sh@93 -- # nvmftestinit 00:24:28.583 22:10:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:28.583 22:10:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.583 22:10:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:28.583 22:10:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:28.583 22:10:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:28.583 22:10:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.583 22:10:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.583 22:10:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.583 22:10:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:28.583 22:10:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:28.583 22:10:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:28.583 22:10:39 -- common/autotest_common.sh@10 -- # set +x 00:24:36.707 22:10:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:36.707 22:10:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:36.707 22:10:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:36.707 22:10:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:36.707 22:10:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:36.707 22:10:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:36.707 22:10:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:36.707 22:10:47 -- nvmf/common.sh@294 -- # net_devs=() 
00:24:36.707 22:10:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:36.707 22:10:47 -- nvmf/common.sh@295 -- # e810=() 00:24:36.707 22:10:47 -- nvmf/common.sh@295 -- # local -ga e810 00:24:36.707 22:10:47 -- nvmf/common.sh@296 -- # x722=() 00:24:36.707 22:10:47 -- nvmf/common.sh@296 -- # local -ga x722 00:24:36.707 22:10:47 -- nvmf/common.sh@297 -- # mlx=() 00:24:36.707 22:10:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:36.707 22:10:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.707 22:10:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:36.707 22:10:47 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:36.707 22:10:47 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:36.707 22:10:47 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:36.707 22:10:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:36.707 22:10:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:36.707 22:10:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:36.707 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:36.707 22:10:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:36.707 22:10:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:36.707 22:10:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:36.707 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:36.707 22:10:47 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:36.707 22:10:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:36.707 22:10:47 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:36.707 22:10:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:36.707 22:10:47 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.707 22:10:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:36.707 22:10:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.707 22:10:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:36.707 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:36.707 22:10:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.708 22:10:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.708 22:10:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:36.708 22:10:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.708 22:10:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:36.708 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:36.708 22:10:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.708 22:10:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:36.708 22:10:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:36.708 22:10:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:36.708 22:10:47 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:36.708 22:10:47 -- nvmf/common.sh@57 -- # uname 00:24:36.708 22:10:47 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:36.708 22:10:47 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:36.708 22:10:47 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:36.708 22:10:47 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:36.708 22:10:47 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:36.708 22:10:47 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:36.708 22:10:47 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:36.708 22:10:47 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:36.708 22:10:47 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:36.708 22:10:47 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:36.708 22:10:47 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:36.708 22:10:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:36.708 22:10:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:36.708 22:10:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:36.708 22:10:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:36.708 22:10:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:36.708 22:10:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:36.708 22:10:47 -- nvmf/common.sh@104 -- # continue 2 00:24:36.708 22:10:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:36.708 22:10:47 -- 
nvmf/common.sh@104 -- # continue 2 00:24:36.708 22:10:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:36.708 22:10:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:36.708 22:10:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:36.708 22:10:47 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:36.708 22:10:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:36.708 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:36.708 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:36.708 altname enp217s0f0np0 00:24:36.708 altname ens818f0np0 00:24:36.708 inet 192.168.100.8/24 scope global mlx_0_0 00:24:36.708 valid_lft forever preferred_lft forever 00:24:36.708 22:10:47 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:36.708 22:10:47 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:36.708 22:10:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:36.708 22:10:47 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:36.708 22:10:47 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:36.708 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:36.708 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:36.708 altname enp217s0f1np1 00:24:36.708 altname ens818f1np1 00:24:36.708 inet 192.168.100.9/24 scope global mlx_0_1 00:24:36.708 valid_lft forever preferred_lft forever 00:24:36.708 22:10:47 -- nvmf/common.sh@410 -- # return 0 00:24:36.708 22:10:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:36.708 22:10:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:36.708 22:10:47 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:36.708 22:10:47 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:36.708 22:10:47 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:36.708 22:10:47 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:36.708 22:10:47 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:36.708 22:10:47 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:36.708 22:10:47 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:36.708 22:10:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:36.708 22:10:47 -- nvmf/common.sh@104 -- # continue 2 00:24:36.708 22:10:47 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:36.708 22:10:47 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:36.708 22:10:47 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:24:36.708 22:10:47 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:36.708 22:10:47 -- nvmf/common.sh@104 -- # continue 2 00:24:36.708 22:10:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:36.708 22:10:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:36.708 22:10:47 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:36.708 22:10:47 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:36.708 22:10:47 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:36.708 22:10:47 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:36.708 22:10:47 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:36.968 22:10:47 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:36.968 192.168.100.9' 00:24:36.968 22:10:47 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:36.968 192.168.100.9' 00:24:36.968 22:10:47 -- nvmf/common.sh@445 -- # head -n 1 00:24:36.968 22:10:47 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:36.968 22:10:47 -- nvmf/common.sh@446 -- # head -n 1 00:24:36.968 22:10:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:36.968 192.168.100.9' 00:24:36.968 22:10:47 -- nvmf/common.sh@446 -- # tail -n +2 00:24:36.968 22:10:47 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:36.968 22:10:47 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:36.968 22:10:47 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:36.968 22:10:47 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:36.968 22:10:47 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:36.968 22:10:47 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:36.968 22:10:47 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:36.968 22:10:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:36.968 22:10:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:36.968 22:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:36.968 22:10:47 -- nvmf/common.sh@469 -- # nvmfpid=2287802 00:24:36.968 22:10:47 -- nvmf/common.sh@470 -- # waitforlisten 2287802 00:24:36.968 22:10:47 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:36.968 22:10:47 -- common/autotest_common.sh@819 -- # '[' -z 2287802 ']' 00:24:36.968 22:10:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.968 22:10:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:36.968 22:10:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.968 22:10:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:36.968 22:10:47 -- common/autotest_common.sh@10 -- # set +x 00:24:36.968 [2024-07-26 22:10:48.037012] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:36.968 [2024-07-26 22:10:48.037065] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.968 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.968 [2024-07-26 22:10:48.124707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:36.968 [2024-07-26 22:10:48.161287] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:36.968 [2024-07-26 22:10:48.161400] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.968 [2024-07-26 22:10:48.161409] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.968 [2024-07-26 22:10:48.161418] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.968 [2024-07-26 22:10:48.161473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.968 [2024-07-26 22:10:48.161475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.905 22:10:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:37.905 22:10:48 -- common/autotest_common.sh@852 -- # return 0 00:24:37.905 22:10:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:37.905 22:10:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:37.905 22:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:37.905 22:10:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.905 22:10:48 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:37.905 22:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.905 22:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:37.905 [2024-07-26 22:10:48.896677] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb3ee80/0xb43370) succeed. 00:24:37.905 [2024-07-26 22:10:48.905543] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb40380/0xb84a00) succeed. 
00:24:37.905 22:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.905 22:10:48 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:37.905 22:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.905 22:10:48 -- common/autotest_common.sh@10 -- # set +x 00:24:37.905 Malloc0 00:24:37.905 22:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.905 22:10:49 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:37.905 22:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.905 22:10:49 -- common/autotest_common.sh@10 -- # set +x 00:24:37.905 22:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.905 22:10:49 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:37.905 22:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.905 22:10:49 -- common/autotest_common.sh@10 -- # set +x 00:24:37.905 22:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.905 22:10:49 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:37.905 22:10:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.905 22:10:49 -- common/autotest_common.sh@10 -- # set +x 00:24:37.906 [2024-07-26 22:10:49.061566] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:37.906 22:10:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.906 22:10:49 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:37.906 22:10:49 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:37.906 22:10:49 -- nvmf/common.sh@520 -- # config=() 00:24:37.906 22:10:49 -- nvmf/common.sh@520 -- # local subsystem config 00:24:37.906 22:10:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:37.906 22:10:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:37.906 { 00:24:37.906 "params": { 00:24:37.906 "name": "Nvme$subsystem", 00:24:37.906 "trtype": "$TEST_TRANSPORT", 00:24:37.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.906 "adrfam": "ipv4", 00:24:37.906 "trsvcid": "$NVMF_PORT", 00:24:37.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.906 "hdgst": ${hdgst:-false}, 00:24:37.906 "ddgst": ${ddgst:-false} 00:24:37.906 }, 00:24:37.906 "method": "bdev_nvme_attach_controller" 00:24:37.906 } 00:24:37.906 EOF 00:24:37.906 )") 00:24:37.906 22:10:49 -- nvmf/common.sh@542 -- # cat 00:24:37.906 22:10:49 -- nvmf/common.sh@544 -- # jq . 00:24:37.906 22:10:49 -- nvmf/common.sh@545 -- # IFS=, 00:24:37.906 22:10:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:37.906 "params": { 00:24:37.906 "name": "Nvme0", 00:24:37.906 "trtype": "rdma", 00:24:37.906 "traddr": "192.168.100.8", 00:24:37.906 "adrfam": "ipv4", 00:24:37.906 "trsvcid": "4420", 00:24:37.906 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:37.906 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:37.906 "hdgst": false, 00:24:37.906 "ddgst": false 00:24:37.906 }, 00:24:37.906 "method": "bdev_nvme_attach_controller" 00:24:37.906 }' 00:24:37.906 [2024-07-26 22:10:49.110367] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:37.906 [2024-07-26 22:10:49.110415] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2287965 ] 00:24:38.165 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.165 [2024-07-26 22:10:49.190499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:38.165 [2024-07-26 22:10:49.227909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.165 [2024-07-26 22:10:49.227911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.430 bdev Nvme0n1 reports 1 memory domains 00:24:43.430 bdev Nvme0n1 supports RDMA memory domain 00:24:43.430 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:43.430 ========================================================================== 00:24:43.430 Latency [us] 00:24:43.430 IOPS MiB/s Average min max 00:24:43.430 Core 2: 21996.43 85.92 726.60 237.50 8665.71 00:24:43.430 Core 3: 22218.59 86.79 719.31 235.88 8671.28 00:24:43.430 ========================================================================== 00:24:43.430 Total : 44215.02 172.71 722.94 235.88 8671.28 00:24:43.430 00:24:43.430 Total operations: 221121, translate 221121 pull_push 0 memzero 0 00:24:43.430 22:10:54 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:43.430 22:10:54 -- host/dma.sh@107 -- # gen_malloc_json 00:24:43.430 22:10:54 -- host/dma.sh@21 -- # jq . 00:24:43.430 [2024-07-26 22:10:54.641168] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:43.430 [2024-07-26 22:10:54.641223] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2288979 ] 00:24:43.688 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.688 [2024-07-26 22:10:54.722297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:43.688 [2024-07-26 22:10:54.758781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.688 [2024-07-26 22:10:54.758784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.963 bdev Malloc0 reports 1 memory domains 00:24:48.963 bdev Malloc0 doesn't support RDMA memory domain 00:24:48.963 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:48.963 ========================================================================== 00:24:48.963 Latency [us] 00:24:48.963 IOPS MiB/s Average min max 00:24:48.963 Core 2: 14797.27 57.80 1080.43 384.97 1362.66 00:24:48.963 Core 3: 15019.37 58.67 1064.47 379.26 1847.84 00:24:48.963 ========================================================================== 00:24:48.963 Total : 29816.64 116.47 1072.39 379.26 1847.84 00:24:48.963 00:24:48.963 Total operations: 149147, translate 0 pull_push 596588 memzero 0 00:24:48.963 22:11:00 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:48.963 22:11:00 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:48.963 22:11:00 -- host/dma.sh@48 -- # local subsystem=0 00:24:48.963 22:11:00 -- host/dma.sh@50 -- # jq . 00:24:48.963 Ignoring -M option 00:24:48.963 [2024-07-26 22:11:00.106285] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:48.963 [2024-07-26 22:11:00.106342] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289851 ] 00:24:48.963 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.963 [2024-07-26 22:11:00.187014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:49.287 [2024-07-26 22:11:00.224421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:49.287 [2024-07-26 22:11:00.224425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.288 [2024-07-26 22:11:00.425100] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:54.557 [2024-07-26 22:11:05.454237] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:54.557 bdev df8d1da4-3eeb-4507-9b94-759d81ae0bec reports 1 memory domains 00:24:54.557 bdev df8d1da4-3eeb-4507-9b94-759d81ae0bec supports RDMA memory domain 00:24:54.557 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:54.557 ========================================================================== 00:24:54.557 Latency [us] 00:24:54.557 IOPS MiB/s Average min max 00:24:54.557 Core 2: 72568.93 283.47 219.62 71.60 1625.07 00:24:54.557 Core 3: 70250.76 274.42 226.84 62.18 1550.90 00:24:54.557 ========================================================================== 00:24:54.557 Total : 142819.69 557.89 223.17 62.18 1625.07 00:24:54.557 00:24:54.557 Total operations: 714168, translate 0 pull_push 0 memzero 714168 00:24:54.557 22:11:05 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:54.557 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.557 [2024-07-26 22:11:05.749134] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:57.086 Initializing NVMe Controllers 00:24:57.086 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:24:57.086 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:24:57.086 Initialization complete. Launching workers. 00:24:57.086 ======================================================== 00:24:57.086 Latency(us) 00:24:57.086 Device Information : IOPS MiB/s Average min max 00:24:57.086 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7995.94 4975.79 10974.92 00:24:57.086 ======================================================== 00:24:57.086 Total : 2016.00 7.88 7995.94 4975.79 10974.92 00:24:57.086 00:24:57.086 22:11:08 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:24:57.086 22:11:08 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:24:57.086 22:11:08 -- host/dma.sh@48 -- # local subsystem=0 00:24:57.086 22:11:08 -- host/dma.sh@50 -- # jq . 
00:24:57.086 [2024-07-26 22:11:08.086544] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:57.086 [2024-07-26 22:11:08.086591] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291198 ] 00:24:57.086 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.086 [2024-07-26 22:11:08.166904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:57.086 [2024-07-26 22:11:08.204200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.086 [2024-07-26 22:11:08.204203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.344 [2024-07-26 22:11:08.405418] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:25:02.612 [2024-07-26 22:11:13.434261] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:25:02.612 bdev f562e170-67de-43ff-bab8-f0015f510259 reports 1 memory domains 00:25:02.612 bdev f562e170-67de-43ff-bab8-f0015f510259 supports RDMA memory domain 00:25:02.612 Initialization complete, running randrw IO for 5 sec on 2 cores 00:25:02.612 ========================================================================== 00:25:02.612 Latency [us] 00:25:02.612 IOPS MiB/s Average min max 00:25:02.612 Core 2: 19230.95 75.12 831.31 15.00 11631.34 00:25:02.612 Core 3: 19664.47 76.81 812.95 10.83 11760.82 00:25:02.612 ========================================================================== 00:25:02.612 Total : 38895.42 151.94 822.03 10.83 11760.82 00:25:02.612 00:25:02.612 Total operations: 194512, translate 194403 pull_push 0 memzero 109 00:25:02.612 22:11:13 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:02.612 22:11:13 -- host/dma.sh@120 -- # nvmftestfini 00:25:02.612 22:11:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:02.612 22:11:13 -- nvmf/common.sh@116 -- # sync 00:25:02.612 22:11:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:02.612 22:11:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:02.612 22:11:13 -- nvmf/common.sh@119 -- # set +e 00:25:02.612 22:11:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:02.612 22:11:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:02.612 rmmod nvme_rdma 00:25:02.612 rmmod nvme_fabrics 00:25:02.612 22:11:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:02.612 22:11:13 -- nvmf/common.sh@123 -- # set -e 00:25:02.612 22:11:13 -- nvmf/common.sh@124 -- # return 0 00:25:02.612 22:11:13 -- nvmf/common.sh@477 -- # '[' -n 2287802 ']' 00:25:02.612 22:11:13 -- nvmf/common.sh@478 -- # killprocess 2287802 00:25:02.612 22:11:13 -- common/autotest_common.sh@926 -- # '[' -z 2287802 ']' 00:25:02.612 22:11:13 -- common/autotest_common.sh@930 -- # kill -0 2287802 00:25:02.612 22:11:13 -- common/autotest_common.sh@931 -- # uname 00:25:02.612 22:11:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:02.612 22:11:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2287802 00:25:02.612 22:11:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:02.612 22:11:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:02.612 22:11:13 -- common/autotest_common.sh@944 -- # echo 'killing process with 
pid 2287802' 00:25:02.612 killing process with pid 2287802 00:25:02.612 22:11:13 -- common/autotest_common.sh@945 -- # kill 2287802 00:25:02.612 22:11:13 -- common/autotest_common.sh@950 -- # wait 2287802 00:25:02.871 22:11:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:02.871 22:11:14 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:02.871 00:25:02.871 real 0m34.388s 00:25:02.871 user 1m36.618s 00:25:02.871 sys 0m7.460s 00:25:02.871 22:11:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.871 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:02.871 ************************************ 00:25:02.871 END TEST dma 00:25:02.871 ************************************ 00:25:02.871 22:11:14 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:02.871 22:11:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:02.871 22:11:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.871 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:02.871 ************************************ 00:25:02.871 START TEST nvmf_identify 00:25:02.871 ************************************ 00:25:02.871 22:11:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:03.131 * Looking for test storage... 00:25:03.131 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:03.131 22:11:14 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.131 22:11:14 -- nvmf/common.sh@7 -- # uname -s 00:25:03.131 22:11:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.131 22:11:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.131 22:11:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.131 22:11:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.131 22:11:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.131 22:11:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.131 22:11:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.131 22:11:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.131 22:11:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.131 22:11:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.131 22:11:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:03.131 22:11:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:03.131 22:11:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.131 22:11:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.131 22:11:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.131 22:11:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:03.131 22:11:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.131 22:11:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.131 22:11:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.131 22:11:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.131 22:11:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.131 22:11:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.131 22:11:14 -- paths/export.sh@5 -- # export PATH 00:25:03.131 22:11:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.131 22:11:14 -- nvmf/common.sh@46 -- # : 0 00:25:03.131 22:11:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:03.131 22:11:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:03.131 22:11:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:03.131 22:11:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.131 22:11:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.131 22:11:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:03.131 22:11:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:03.131 22:11:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:03.131 22:11:14 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.131 22:11:14 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.131 22:11:14 -- host/identify.sh@14 -- # nvmftestinit 00:25:03.131 22:11:14 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:03.131 22:11:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.131 22:11:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:03.131 22:11:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:03.131 22:11:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:03.131 22:11:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:25:03.131 22:11:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.131 22:11:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.131 22:11:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:03.131 22:11:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:03.131 22:11:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:03.131 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:25:11.256 22:11:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:11.256 22:11:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:11.256 22:11:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:11.256 22:11:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:11.256 22:11:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:11.256 22:11:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:11.256 22:11:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:11.256 22:11:21 -- nvmf/common.sh@294 -- # net_devs=() 00:25:11.256 22:11:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:11.256 22:11:21 -- nvmf/common.sh@295 -- # e810=() 00:25:11.256 22:11:21 -- nvmf/common.sh@295 -- # local -ga e810 00:25:11.256 22:11:21 -- nvmf/common.sh@296 -- # x722=() 00:25:11.256 22:11:21 -- nvmf/common.sh@296 -- # local -ga x722 00:25:11.256 22:11:21 -- nvmf/common.sh@297 -- # mlx=() 00:25:11.256 22:11:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:11.256 22:11:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.256 22:11:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:11.256 22:11:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:11.256 22:11:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:11.256 22:11:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:11.256 22:11:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:11.256 22:11:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.256 22:11:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:11.256 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:11.256 22:11:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
00:25:11.256 22:11:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:11.256 22:11:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.256 22:11:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:11.256 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:11.256 22:11:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:11.256 22:11:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:11.256 22:11:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:11.256 22:11:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.256 22:11:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.256 22:11:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.256 22:11:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.256 22:11:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:11.256 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:11.256 22:11:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.256 22:11:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.256 22:11:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.256 22:11:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.256 22:11:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.256 22:11:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:11.256 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:11.256 22:11:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.256 22:11:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:11.257 22:11:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:11.257 22:11:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:11.257 22:11:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:11.257 22:11:21 -- nvmf/common.sh@57 -- # uname 00:25:11.257 22:11:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:11.257 22:11:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:11.257 22:11:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:11.257 22:11:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:11.257 22:11:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:11.257 22:11:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:11.257 22:11:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:11.257 22:11:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:11.257 22:11:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:11.257 22:11:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:11.257 22:11:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:11.257 22:11:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:11.257 22:11:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:11.257 22:11:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:11.257 22:11:21 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:11.257 22:11:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:11.257 22:11:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:11.257 22:11:21 -- nvmf/common.sh@104 -- # continue 2 00:25:11.257 22:11:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:11.257 22:11:21 -- nvmf/common.sh@104 -- # continue 2 00:25:11.257 22:11:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:11.257 22:11:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:11.257 22:11:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:11.257 22:11:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:11.257 22:11:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:11.257 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:11.257 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:11.257 altname enp217s0f0np0 00:25:11.257 altname ens818f0np0 00:25:11.257 inet 192.168.100.8/24 scope global mlx_0_0 00:25:11.257 valid_lft forever preferred_lft forever 00:25:11.257 22:11:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:11.257 22:11:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:11.257 22:11:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:11.257 22:11:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:11.257 22:11:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:11.257 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:11.257 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:11.257 altname enp217s0f1np1 00:25:11.257 altname ens818f1np1 00:25:11.257 inet 192.168.100.9/24 scope global mlx_0_1 00:25:11.257 valid_lft forever preferred_lft forever 00:25:11.257 22:11:21 -- nvmf/common.sh@410 -- # return 0 00:25:11.257 22:11:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:11.257 22:11:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:11.257 22:11:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:11.257 22:11:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:11.257 22:11:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:11.257 22:11:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:11.257 22:11:21 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:11.257 22:11:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:11.257 22:11:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:11.257 22:11:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:11.257 22:11:21 -- nvmf/common.sh@104 -- # continue 2 00:25:11.257 22:11:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:11.257 22:11:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:11.257 22:11:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:11.257 22:11:21 -- nvmf/common.sh@104 -- # continue 2 00:25:11.257 22:11:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:11.257 22:11:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:11.257 22:11:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:11.257 22:11:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:11.257 22:11:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:11.257 22:11:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:11.257 22:11:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:11.257 22:11:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:11.257 192.168.100.9' 00:25:11.257 22:11:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:11.257 192.168.100.9' 00:25:11.257 22:11:21 -- nvmf/common.sh@445 -- # head -n 1 00:25:11.257 22:11:21 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:11.257 22:11:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:11.257 192.168.100.9' 00:25:11.257 22:11:21 -- nvmf/common.sh@446 -- # tail -n +2 00:25:11.257 22:11:21 -- nvmf/common.sh@446 -- # head -n 1 00:25:11.257 22:11:21 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:11.257 22:11:21 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:11.257 22:11:21 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:11.257 22:11:21 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:11.257 22:11:21 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:11.257 22:11:21 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:11.257 22:11:21 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:11.257 22:11:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:11.257 22:11:21 -- common/autotest_common.sh@10 -- # set +x 00:25:11.257 22:11:21 -- host/identify.sh@19 -- # nvmfpid=2296164 00:25:11.257 22:11:21 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:11.257 22:11:21 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:25:11.257 22:11:21 -- host/identify.sh@23 -- # waitforlisten 2296164 00:25:11.257 22:11:21 -- common/autotest_common.sh@819 -- # '[' -z 2296164 ']' 00:25:11.257 22:11:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.257 22:11:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:11.257 22:11:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.257 22:11:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:11.257 22:11:21 -- common/autotest_common.sh@10 -- # set +x 00:25:11.257 [2024-07-26 22:11:21.979169] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:11.257 [2024-07-26 22:11:21.979223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.257 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.257 [2024-07-26 22:11:22.064374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.257 [2024-07-26 22:11:22.104120] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:11.257 [2024-07-26 22:11:22.104230] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.257 [2024-07-26 22:11:22.104240] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.257 [2024-07-26 22:11:22.104249] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.257 [2024-07-26 22:11:22.104289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.257 [2024-07-26 22:11:22.104386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.257 [2024-07-26 22:11:22.104491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.257 [2024-07-26 22:11:22.104492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.823 22:11:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:11.823 22:11:22 -- common/autotest_common.sh@852 -- # return 0 00:25:11.823 22:11:22 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:11.823 22:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.823 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 [2024-07-26 22:11:22.807295] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f4a4b0/0x1f4e9a0) succeed. 00:25:11.823 [2024-07-26 22:11:22.817668] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f4baa0/0x1f90030) succeed. 
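The target bring-up traced above reduces to the shell sketch below. It assumes an in-tree SPDK build run from the repository root, with scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, and that rpc_cmd is (as in autotest_common.sh) a thin wrapper around rpc.py; every flag value is copied from the trace, nothing else is added.

# Sketch of the nvmf target start recorded above (assumption: cwd is the SPDK checkout used by this job).
modprobe nvme-rdma                               # host-side NVMe/RDMA module, loaded by nvmftestinit above
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &     # shm id 0, tracepoint group mask 0xFFFF, core mask 0xF (4 cores, matching the reactor log lines)
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192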
00:25:11.823 22:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.823 22:11:22 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:11.823 22:11:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:11.823 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 22:11:22 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:11.823 22:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.823 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 Malloc0 00:25:11.823 22:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.823 22:11:23 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:11.823 22:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.823 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 22:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.823 22:11:23 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:11.823 22:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.823 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 22:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.823 22:11:23 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:11.823 22:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.823 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 [2024-07-26 22:11:23.029027] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:11.823 22:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.823 22:11:23 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:11.823 22:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.823 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 22:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.823 22:11:23 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:11.823 22:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.823 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:11.823 [2024-07-26 22:11:23.044707] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:12.086 [ 00:25:12.086 { 00:25:12.086 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:12.086 "subtype": "Discovery", 00:25:12.086 "listen_addresses": [ 00:25:12.086 { 00:25:12.086 "transport": "RDMA", 00:25:12.086 "trtype": "RDMA", 00:25:12.086 "adrfam": "IPv4", 00:25:12.086 "traddr": "192.168.100.8", 00:25:12.086 "trsvcid": "4420" 00:25:12.086 } 00:25:12.086 ], 00:25:12.086 "allow_any_host": true, 00:25:12.086 "hosts": [] 00:25:12.086 }, 00:25:12.086 { 00:25:12.086 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.086 "subtype": "NVMe", 00:25:12.086 "listen_addresses": [ 00:25:12.086 { 00:25:12.086 "transport": "RDMA", 00:25:12.086 "trtype": "RDMA", 00:25:12.086 "adrfam": "IPv4", 00:25:12.086 "traddr": "192.168.100.8", 00:25:12.086 "trsvcid": "4420" 00:25:12.086 } 00:25:12.086 ], 00:25:12.086 "allow_any_host": true, 00:25:12.086 "hosts": [], 00:25:12.086 "serial_number": "SPDK00000000000001", 
00:25:12.086 "model_number": "SPDK bdev Controller", 00:25:12.086 "max_namespaces": 32, 00:25:12.086 "min_cntlid": 1, 00:25:12.086 "max_cntlid": 65519, 00:25:12.086 "namespaces": [ 00:25:12.086 { 00:25:12.086 "nsid": 1, 00:25:12.086 "bdev_name": "Malloc0", 00:25:12.086 "name": "Malloc0", 00:25:12.086 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:12.086 "eui64": "ABCDEF0123456789", 00:25:12.086 "uuid": "7da70646-569e-4069-8078-21170496f498" 00:25:12.086 } 00:25:12.086 ] 00:25:12.086 } 00:25:12.086 ] 00:25:12.086 22:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.086 22:11:23 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:12.086 [2024-07-26 22:11:23.085980] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:12.086 [2024-07-26 22:11:23.086017] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2296243 ] 00:25:12.086 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.086 [2024-07-26 22:11:23.134863] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:12.086 [2024-07-26 22:11:23.134936] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:12.086 [2024-07-26 22:11:23.134953] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:12.086 [2024-07-26 22:11:23.134958] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:12.086 [2024-07-26 22:11:23.134990] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:12.086 [2024-07-26 22:11:23.154160] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:25:12.086 [2024-07-26 22:11:23.164285] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:12.086 [2024-07-26 22:11:23.164295] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:12.086 [2024-07-26 22:11:23.164302] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164309] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164315] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164322] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164328] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164334] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164340] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164347] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164353] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164359] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164365] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164372] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164378] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164384] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164390] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164400] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164406] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164413] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164419] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164425] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164431] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164438] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164444] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 
22:11:23.164450] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164457] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164464] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164471] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164478] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164484] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164490] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164497] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164502] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:12.086 [2024-07-26 22:11:23.164510] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:12.086 [2024-07-26 22:11:23.164515] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:12.086 [2024-07-26 22:11:23.164530] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.164543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x184100 00:25:12.086 [2024-07-26 22:11:23.169633] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.086 [2024-07-26 22:11:23.169643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:12.086 [2024-07-26 22:11:23.169651] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.169658] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:12.086 [2024-07-26 22:11:23.169665] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:12.086 [2024-07-26 22:11:23.169672] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:12.086 [2024-07-26 22:11:23.169684] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.169692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.086 [2024-07-26 22:11:23.169711] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.086 [2024-07-26 22:11:23.169717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:12.086 [2024-07-26 22:11:23.169725] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:12.086 [2024-07-26 22:11:23.169732] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.169738] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:12.086 [2024-07-26 22:11:23.169746] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.169754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.086 [2024-07-26 22:11:23.169767] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.086 [2024-07-26 22:11:23.169773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:12.086 [2024-07-26 22:11:23.169780] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:12.086 [2024-07-26 22:11:23.169786] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.169793] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:12.086 [2024-07-26 22:11:23.169801] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.086 [2024-07-26 22:11:23.169808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.087 [2024-07-26 22:11:23.169829] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.169835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.169841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:12.087 [2024-07-26 22:11:23.169847] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.169856] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.169863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.087 [2024-07-26 22:11:23.169883] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.169888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.169895] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:12.087 [2024-07-26 22:11:23.169901] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:12.087 [2024-07-26 22:11:23.169907] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.169914] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:12.087 [2024-07-26 22:11:23.170020] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:12.087 [2024-07-26 22:11:23.170026] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:12.087 [2024-07-26 22:11:23.170035] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.087 [2024-07-26 22:11:23.170064] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.170076] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:12.087 [2024-07-26 22:11:23.170082] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170090] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.087 [2024-07-26 22:11:23.170115] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.170127] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:12.087 [2024-07-26 22:11:23.170133] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:12.087 [2024-07-26 22:11:23.170139] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170146] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:12.087 [2024-07-26 22:11:23.170155] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:12.087 [2024-07-26 22:11:23.170163] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:12.087 [2024-07-26 22:11:23.170204] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170209] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.170218] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:12.087 [2024-07-26 22:11:23.170225] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:12.087 [2024-07-26 22:11:23.170230] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:12.087 [2024-07-26 22:11:23.170237] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:12.087 [2024-07-26 22:11:23.170242] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:12.087 [2024-07-26 22:11:23.170248] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:12.087 [2024-07-26 22:11:23.170254] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170264] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:12.087 [2024-07-26 22:11:23.170272] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.087 [2024-07-26 22:11:23.170304] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.170318] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.087 [2024-07-26 22:11:23.170332] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.087 [2024-07-26 22:11:23.170346] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.087 [2024-07-26 22:11:23.170360] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.087 [2024-07-26 22:11:23.170373] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:25:12.087 [2024-07-26 22:11:23.170378] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170389] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:12.087 [2024-07-26 22:11:23.170396] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.087 [2024-07-26 22:11:23.170420] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.170432] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:12.087 [2024-07-26 22:11:23.170438] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:12.087 [2024-07-26 22:11:23.170444] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170453] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:12.087 [2024-07-26 22:11:23.170484] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.170497] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170507] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:12.087 [2024-07-26 22:11:23.170531] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x184100 00:25:12.087 [2024-07-26 22:11:23.170547] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.087 [2024-07-26 22:11:23.170577] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.170594] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x184100 00:25:12.087 [2024-07-26 22:11:23.170608] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170614] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:12.087 [2024-07-26 22:11:23.170631] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:12.087 [2024-07-26 22:11:23.170638] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.087 [2024-07-26 22:11:23.170643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:12.088 [2024-07-26 22:11:23.170653] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:12.088 [2024-07-26 22:11:23.170660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x184100 00:25:12.088 [2024-07-26 22:11:23.170666] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:12.088 [2024-07-26 22:11:23.170685] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.088 [2024-07-26 22:11:23.170691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.088 [2024-07-26 22:11:23.170702] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:12.088 ===================================================== 00:25:12.088 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:12.088 ===================================================== 00:25:12.088 Controller Capabilities/Features 00:25:12.088 ================================ 00:25:12.088 Vendor ID: 0000 00:25:12.088 Subsystem Vendor ID: 0000 00:25:12.088 Serial Number: .................... 00:25:12.088 Model Number: ........................................ 
00:25:12.088 Firmware Version: 24.01.1 00:25:12.088 Recommended Arb Burst: 0 00:25:12.088 IEEE OUI Identifier: 00 00 00 00:25:12.088 Multi-path I/O 00:25:12.088 May have multiple subsystem ports: No 00:25:12.088 May have multiple controllers: No 00:25:12.088 Associated with SR-IOV VF: No 00:25:12.088 Max Data Transfer Size: 131072 00:25:12.088 Max Number of Namespaces: 0 00:25:12.088 Max Number of I/O Queues: 1024 00:25:12.088 NVMe Specification Version (VS): 1.3 00:25:12.088 NVMe Specification Version (Identify): 1.3 00:25:12.088 Maximum Queue Entries: 128 00:25:12.088 Contiguous Queues Required: Yes 00:25:12.088 Arbitration Mechanisms Supported 00:25:12.088 Weighted Round Robin: Not Supported 00:25:12.088 Vendor Specific: Not Supported 00:25:12.088 Reset Timeout: 15000 ms 00:25:12.088 Doorbell Stride: 4 bytes 00:25:12.088 NVM Subsystem Reset: Not Supported 00:25:12.088 Command Sets Supported 00:25:12.088 NVM Command Set: Supported 00:25:12.088 Boot Partition: Not Supported 00:25:12.088 Memory Page Size Minimum: 4096 bytes 00:25:12.088 Memory Page Size Maximum: 4096 bytes 00:25:12.088 Persistent Memory Region: Not Supported 00:25:12.088 Optional Asynchronous Events Supported 00:25:12.088 Namespace Attribute Notices: Not Supported 00:25:12.088 Firmware Activation Notices: Not Supported 00:25:12.088 ANA Change Notices: Not Supported 00:25:12.088 PLE Aggregate Log Change Notices: Not Supported 00:25:12.088 LBA Status Info Alert Notices: Not Supported 00:25:12.088 EGE Aggregate Log Change Notices: Not Supported 00:25:12.088 Normal NVM Subsystem Shutdown event: Not Supported 00:25:12.088 Zone Descriptor Change Notices: Not Supported 00:25:12.088 Discovery Log Change Notices: Supported 00:25:12.088 Controller Attributes 00:25:12.088 128-bit Host Identifier: Not Supported 00:25:12.088 Non-Operational Permissive Mode: Not Supported 00:25:12.088 NVM Sets: Not Supported 00:25:12.088 Read Recovery Levels: Not Supported 00:25:12.088 Endurance Groups: Not Supported 00:25:12.088 Predictable Latency Mode: Not Supported 00:25:12.088 Traffic Based Keep ALive: Not Supported 00:25:12.088 Namespace Granularity: Not Supported 00:25:12.088 SQ Associations: Not Supported 00:25:12.088 UUID List: Not Supported 00:25:12.088 Multi-Domain Subsystem: Not Supported 00:25:12.088 Fixed Capacity Management: Not Supported 00:25:12.088 Variable Capacity Management: Not Supported 00:25:12.088 Delete Endurance Group: Not Supported 00:25:12.088 Delete NVM Set: Not Supported 00:25:12.088 Extended LBA Formats Supported: Not Supported 00:25:12.088 Flexible Data Placement Supported: Not Supported 00:25:12.088 00:25:12.088 Controller Memory Buffer Support 00:25:12.088 ================================ 00:25:12.088 Supported: No 00:25:12.088 00:25:12.088 Persistent Memory Region Support 00:25:12.088 ================================ 00:25:12.088 Supported: No 00:25:12.088 00:25:12.088 Admin Command Set Attributes 00:25:12.088 ============================ 00:25:12.088 Security Send/Receive: Not Supported 00:25:12.088 Format NVM: Not Supported 00:25:12.088 Firmware Activate/Download: Not Supported 00:25:12.088 Namespace Management: Not Supported 00:25:12.088 Device Self-Test: Not Supported 00:25:12.088 Directives: Not Supported 00:25:12.088 NVMe-MI: Not Supported 00:25:12.088 Virtualization Management: Not Supported 00:25:12.088 Doorbell Buffer Config: Not Supported 00:25:12.088 Get LBA Status Capability: Not Supported 00:25:12.088 Command & Feature Lockdown Capability: Not Supported 00:25:12.088 Abort Command Limit: 1 00:25:12.088 
Async Event Request Limit: 4 00:25:12.088 Number of Firmware Slots: N/A 00:25:12.088 Firmware Slot 1 Read-Only: N/A 00:25:12.088 Firmware Activation Without Reset: N/A 00:25:12.088 Multiple Update Detection Support: N/A 00:25:12.088 Firmware Update Granularity: No Information Provided 00:25:12.088 Per-Namespace SMART Log: No 00:25:12.088 Asymmetric Namespace Access Log Page: Not Supported 00:25:12.088 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:12.088 Command Effects Log Page: Not Supported 00:25:12.088 Get Log Page Extended Data: Supported 00:25:12.088 Telemetry Log Pages: Not Supported 00:25:12.088 Persistent Event Log Pages: Not Supported 00:25:12.088 Supported Log Pages Log Page: May Support 00:25:12.088 Commands Supported & Effects Log Page: Not Supported 00:25:12.088 Feature Identifiers & Effects Log Page:May Support 00:25:12.088 NVMe-MI Commands & Effects Log Page: May Support 00:25:12.088 Data Area 4 for Telemetry Log: Not Supported 00:25:12.088 Error Log Page Entries Supported: 128 00:25:12.088 Keep Alive: Not Supported 00:25:12.088 00:25:12.088 NVM Command Set Attributes 00:25:12.088 ========================== 00:25:12.088 Submission Queue Entry Size 00:25:12.088 Max: 1 00:25:12.088 Min: 1 00:25:12.088 Completion Queue Entry Size 00:25:12.088 Max: 1 00:25:12.088 Min: 1 00:25:12.088 Number of Namespaces: 0 00:25:12.088 Compare Command: Not Supported 00:25:12.088 Write Uncorrectable Command: Not Supported 00:25:12.088 Dataset Management Command: Not Supported 00:25:12.088 Write Zeroes Command: Not Supported 00:25:12.088 Set Features Save Field: Not Supported 00:25:12.088 Reservations: Not Supported 00:25:12.088 Timestamp: Not Supported 00:25:12.088 Copy: Not Supported 00:25:12.088 Volatile Write Cache: Not Present 00:25:12.088 Atomic Write Unit (Normal): 1 00:25:12.088 Atomic Write Unit (PFail): 1 00:25:12.088 Atomic Compare & Write Unit: 1 00:25:12.088 Fused Compare & Write: Supported 00:25:12.088 Scatter-Gather List 00:25:12.088 SGL Command Set: Supported 00:25:12.088 SGL Keyed: Supported 00:25:12.088 SGL Bit Bucket Descriptor: Not Supported 00:25:12.088 SGL Metadata Pointer: Not Supported 00:25:12.088 Oversized SGL: Not Supported 00:25:12.088 SGL Metadata Address: Not Supported 00:25:12.088 SGL Offset: Supported 00:25:12.088 Transport SGL Data Block: Not Supported 00:25:12.088 Replay Protected Memory Block: Not Supported 00:25:12.088 00:25:12.088 Firmware Slot Information 00:25:12.088 ========================= 00:25:12.088 Active slot: 0 00:25:12.088 00:25:12.088 00:25:12.088 Error Log 00:25:12.088 ========= 00:25:12.088 00:25:12.088 Active Namespaces 00:25:12.088 ================= 00:25:12.088 Discovery Log Page 00:25:12.088 ================== 00:25:12.088 Generation Counter: 2 00:25:12.088 Number of Records: 2 00:25:12.088 Record Format: 0 00:25:12.088 00:25:12.088 Discovery Log Entry 0 00:25:12.088 ---------------------- 00:25:12.088 Transport Type: 1 (RDMA) 00:25:12.088 Address Family: 1 (IPv4) 00:25:12.088 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:12.088 Entry Flags: 00:25:12.088 Duplicate Returned Information: 1 00:25:12.088 Explicit Persistent Connection Support for Discovery: 1 00:25:12.088 Transport Requirements: 00:25:12.088 Secure Channel: Not Required 00:25:12.088 Port ID: 0 (0x0000) 00:25:12.088 Controller ID: 65535 (0xffff) 00:25:12.088 Admin Max SQ Size: 128 00:25:12.088 Transport Service Identifier: 4420 00:25:12.088 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:12.088 Transport Address: 192.168.100.8 
00:25:12.088 Transport Specific Address Subtype - RDMA 00:25:12.088 RDMA QP Service Type: 1 (Reliable Connected) 00:25:12.088 RDMA Provider Type: 1 (No provider specified) 00:25:12.088 RDMA CM Service: 1 (RDMA_CM) 00:25:12.089 Discovery Log Entry 1 00:25:12.089 ---------------------- 00:25:12.089 Transport Type: 1 (RDMA) 00:25:12.089 Address Family: 1 (IPv4) 00:25:12.089 Subsystem Type: 2 (NVM Subsystem) 00:25:12.089 Entry Flags: 00:25:12.089 Duplicate Returned Information: 0 00:25:12.089 Explicit Persistent Connection Support for Discovery: 0 00:25:12.089 Transport Requirements: 00:25:12.089 Secure Channel: Not Required 00:25:12.089 Port ID: 0 (0x0000) 00:25:12.089 Controller ID: 65535 (0xffff) 00:25:12.089 Admin Max SQ Size: [2024-07-26 22:11:23.170772] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:12.089 [2024-07-26 22:11:23.170782] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58979 doesn't match qid 00:25:12.089 [2024-07-26 22:11:23.170797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32617 cdw0:5 sqhd:0e28 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.170804] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58979 doesn't match qid 00:25:12.089 [2024-07-26 22:11:23.170812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32617 cdw0:5 sqhd:0e28 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.170818] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58979 doesn't match qid 00:25:12.089 [2024-07-26 22:11:23.170827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32617 cdw0:5 sqhd:0e28 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.170833] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58979 doesn't match qid 00:25:12.089 [2024-07-26 22:11:23.170841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32617 cdw0:5 sqhd:0e28 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.170851] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.170860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.170879] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.170885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.170893] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.170902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.170908] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.170927] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.170933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.170939] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:12.089 [2024-07-26 22:11:23.170945] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:12.089 [2024-07-26 22:11:23.170952] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.170961] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.170969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.170991] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.170996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171003] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171012] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171036] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171050] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171059] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171083] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171097] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171107] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171138] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171150] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171160] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171192] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171204] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171213] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171235] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171247] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171256] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171286] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171299] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171308] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171338] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171350] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171359] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171385] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:25:12.089 [2024-07-26 22:11:23.171397] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171405] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171427] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171439] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171448] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171478] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171490] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171499] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.089 [2024-07-26 22:11:23.171529] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.089 [2024-07-26 22:11:23.171534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:12.089 [2024-07-26 22:11:23.171541] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171550] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.089 [2024-07-26 22:11:23.171558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.171576] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171588] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171596] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 
22:11:23.171623] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171643] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171651] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.171677] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171689] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171698] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.171725] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171737] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171746] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.171770] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171782] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171791] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.171823] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171834] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171843] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.171871] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171883] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171892] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.171924] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171935] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171944] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.171968] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.171974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.171980] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.171988] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.172020] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.172025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.172032] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172040] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.172064] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.172070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:12.090 
[2024-07-26 22:11:23.172076] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172085] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.172115] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.172120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.172127] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172135] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.172163] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.172169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.172175] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172184] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.090 [2024-07-26 22:11:23.172210] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.090 [2024-07-26 22:11:23.172215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:12.090 [2024-07-26 22:11:23.172221] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172230] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.090 [2024-07-26 22:11:23.172238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172254] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172266] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172276] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172306] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172318] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172327] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172351] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172362] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172371] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172397] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172409] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172417] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172443] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172455] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172464] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172494] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172505] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172514] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172536] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172548] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172559] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172589] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172601] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172609] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172641] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172653] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172662] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172694] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172705] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172714] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172740] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 
22:11:23.172752] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172760] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172786] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172798] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172807] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172831] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172844] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172853] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172879] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172891] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172899] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172921] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172933] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172942] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.172968] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.172973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.172980] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172988] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.172996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.173012] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.173018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.173024] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.173033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.173040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.091 [2024-07-26 22:11:23.173057] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.091 [2024-07-26 22:11:23.173062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:12.091 [2024-07-26 22:11:23.173068] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.173077] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.091 [2024-07-26 22:11:23.173085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173099] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173112] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173121] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173151] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173163] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173171] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173193] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173205] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173214] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173240] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173252] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173260] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173286] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173298] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173307] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173337] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173349] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173357] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173389] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 
22:11:23.173403] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173411] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173439] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173451] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173461] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173483] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173494] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173503] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173527] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173539] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173548] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173578] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.173583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.173589] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173598] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.173606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.173620] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.177631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.177639] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.177648] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.177656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.092 [2024-07-26 22:11:23.177680] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.092 [2024-07-26 22:11:23.177688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000a p:0 m:0 dnr:0 00:25:12.092 [2024-07-26 22:11:23.177694] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:12.092 [2024-07-26 22:11:23.177701] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:12.092 128 00:25:12.092 Transport Service Identifier: 4420 00:25:12.092 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:12.092 Transport Address: 192.168.100.8 00:25:12.092 Transport Specific Address Subtype - RDMA 00:25:12.092 RDMA QP Service Type: 1 (Reliable Connected) 00:25:12.092 RDMA Provider Type: 1 (No provider specified) 00:25:12.092 RDMA CM Service: 1 (RDMA_CM) 00:25:12.092 22:11:23 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:12.092 [2024-07-26 22:11:23.246988] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:12.092 [2024-07-26 22:11:23.247024] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2296317 ] 00:25:12.092 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.092 [2024-07-26 22:11:23.292821] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:12.092 [2024-07-26 22:11:23.292883] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:12.092 [2024-07-26 22:11:23.292905] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:12.092 [2024-07-26 22:11:23.292911] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:12.092 [2024-07-26 22:11:23.292937] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:12.092 [2024-07-26 22:11:23.303114] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
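The identify step above hands spdk_nvme_identify an RDMA transport ID string (`trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1`) and the driver then walks the admin-queue connect/init state machine that the following debug lines trace. As a rough, hedged C sketch (not the test's code): the transport ID string and NQN are copied from the log above, and the SPDK calls used are standard public API, but details depend on the SPDK tree checked out by this job.

/*
 * Minimal sketch: parse the same transport ID string the test passes via -r,
 * then attach to the controller, which drives the connect adminq / read vs /
 * read cap / CC.EN / CSTS.RDY sequence logged below.
 */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";     /* illustrative name, not from the test */
	if (spdk_env_init(&env_opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* Same -r argument string the autotest passes on the command line above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		fprintf(stderr, "failed to parse transport ID\n");
		return 1;
	}

	/* Synchronously connects the admin queue and runs controller init. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect failed\n");
		return 1;
	}

	printf("connected to %s\n", trid.subnqn);
	spdk_nvme_detach(ctrlr);
	return 0;
}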
00:25:12.352 [2024-07-26 22:11:23.313185] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:12.352 [2024-07-26 22:11:23.313196] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:12.352 [2024-07-26 22:11:23.313203] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313211] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313217] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313224] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313231] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313237] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313244] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313251] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313257] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313263] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313272] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313278] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313285] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313291] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313297] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313303] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:12.352 [2024-07-26 22:11:23.313310] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313316] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313322] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313328] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313335] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313341] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313347] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 
22:11:23.313353] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313360] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313366] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313372] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313378] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313385] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313391] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313397] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313403] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:12.353 [2024-07-26 22:11:23.313408] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:12.353 [2024-07-26 22:11:23.313413] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:12.353 [2024-07-26 22:11:23.313428] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.313439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x184100 00:25:12.353 [2024-07-26 22:11:23.318633] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.318642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.318650] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318659] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:12.353 [2024-07-26 22:11:23.318666] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:12.353 [2024-07-26 22:11:23.318672] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:12.353 [2024-07-26 22:11:23.318685] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.318719] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.318724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.318731] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:12.353 [2024-07-26 22:11:23.318737] nvme_rdma.c:2425:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318744] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:12.353 [2024-07-26 22:11:23.318752] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.318784] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.318790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.318796] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:12.353 [2024-07-26 22:11:23.318802] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318809] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:12.353 [2024-07-26 22:11:23.318817] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.318847] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.318852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.318859] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:12.353 [2024-07-26 22:11:23.318865] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318873] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.318897] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.318903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.318909] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:12.353 [2024-07-26 22:11:23.318915] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:12.353 [2024-07-26 22:11:23.318921] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.318928] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:25:12.353 [2024-07-26 22:11:23.319036] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:12.353 [2024-07-26 22:11:23.319041] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:12.353 [2024-07-26 22:11:23.319050] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.319071] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319083] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:12.353 [2024-07-26 22:11:23.319089] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319098] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.319123] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319135] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:12.353 [2024-07-26 22:11:23.319141] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319147] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319154] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:12.353 [2024-07-26 22:11:23.319162] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319171] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:12.353 [2024-07-26 22:11:23.319220] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319235] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:12.353 [2024-07-26 22:11:23.319241] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:12.353 [2024-07-26 22:11:23.319246] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:12.353 [2024-07-26 22:11:23.319252] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:12.353 [2024-07-26 22:11:23.319257] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:12.353 [2024-07-26 22:11:23.319263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319271] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319280] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319288] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.319314] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319327] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.353 [2024-07-26 22:11:23.319342] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.353 [2024-07-26 22:11:23.319355] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.353 [2024-07-26 22:11:23.319369] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.353 [2024-07-26 22:11:23.319382] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319388] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319398] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319406] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319413] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.319436] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319447] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:12.353 [2024-07-26 22:11:23.319454] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319460] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319467] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319476] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319483] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.353 [2024-07-26 22:11:23.319510] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319564] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319571] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319578] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319586] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184100 00:25:12.353 [2024-07-26 22:11:23.319622] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319648] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:12.353 
[2024-07-26 22:11:23.319661] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319667] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319675] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319683] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:12.353 [2024-07-26 22:11:23.319718] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319742] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319750] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319758] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184100 00:25:12.353 [2024-07-26 22:11:23.319789] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.353 [2024-07-26 22:11:23.319795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:12.353 [2024-07-26 22:11:23.319803] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319809] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:12.353 [2024-07-26 22:11:23.319818] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319827] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319840] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319846] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:25:12.353 [2024-07-26 22:11:23.319852] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:12.353 [2024-07-26 22:11:23.319858] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:12.354 [2024-07-26 22:11:23.319872] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.319879] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.354 [2024-07-26 22:11:23.319887] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.319894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:12.354 [2024-07-26 22:11:23.319905] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.319910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.319917] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.319923] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.319928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.319935] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.319944] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.319951] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.354 [2024-07-26 22:11:23.319968] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.319974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.319980] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.319989] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.319996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.354 [2024-07-26 22:11:23.320020] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.320025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.320032] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320040] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 
lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.354 [2024-07-26 22:11:23.320075] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.320081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.320087] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320098] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x184100 00:25:12.354 [2024-07-26 22:11:23.320115] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x184100 00:25:12.354 [2024-07-26 22:11:23.320132] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x184100 00:25:12.354 [2024-07-26 22:11:23.320147] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x184100 00:25:12.354 [2024-07-26 22:11:23.320164] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.320169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.320182] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320188] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.320194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.320203] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320209] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.320214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.320222] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320228] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.320233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.320244] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:12.354 ===================================================== 00:25:12.354 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.354 ===================================================== 00:25:12.354 Controller Capabilities/Features 00:25:12.354 ================================ 00:25:12.354 Vendor ID: 8086 00:25:12.354 Subsystem Vendor ID: 8086 00:25:12.354 Serial Number: SPDK00000000000001 00:25:12.354 Model Number: SPDK bdev Controller 00:25:12.354 Firmware Version: 24.01.1 00:25:12.354 Recommended Arb Burst: 6 00:25:12.354 IEEE OUI Identifier: e4 d2 5c 00:25:12.354 Multi-path I/O 00:25:12.354 May have multiple subsystem ports: Yes 00:25:12.354 May have multiple controllers: Yes 00:25:12.354 Associated with SR-IOV VF: No 00:25:12.354 Max Data Transfer Size: 131072 00:25:12.354 Max Number of Namespaces: 32 00:25:12.354 Max Number of I/O Queues: 127 00:25:12.354 NVMe Specification Version (VS): 1.3 00:25:12.354 NVMe Specification Version (Identify): 1.3 00:25:12.354 Maximum Queue Entries: 128 00:25:12.354 Contiguous Queues Required: Yes 00:25:12.354 Arbitration Mechanisms Supported 00:25:12.354 Weighted Round Robin: Not Supported 00:25:12.354 Vendor Specific: Not Supported 00:25:12.354 Reset Timeout: 15000 ms 00:25:12.354 Doorbell Stride: 4 bytes 00:25:12.354 NVM Subsystem Reset: Not Supported 00:25:12.354 Command Sets Supported 00:25:12.354 NVM Command Set: Supported 00:25:12.354 Boot Partition: Not Supported 00:25:12.354 Memory Page Size Minimum: 4096 bytes 00:25:12.354 Memory Page Size Maximum: 4096 bytes 00:25:12.354 Persistent Memory Region: Not Supported 00:25:12.354 Optional Asynchronous Events Supported 00:25:12.354 Namespace Attribute Notices: Supported 00:25:12.354 Firmware Activation Notices: Not Supported 00:25:12.354 ANA Change Notices: Not Supported 00:25:12.354 PLE Aggregate Log Change Notices: Not Supported 00:25:12.354 LBA Status Info Alert Notices: Not Supported 00:25:12.354 EGE Aggregate Log Change Notices: Not Supported 00:25:12.354 Normal NVM Subsystem Shutdown event: Not Supported 00:25:12.354 Zone Descriptor Change Notices: Not Supported 00:25:12.354 Discovery Log Change Notices: Not Supported 00:25:12.354 Controller Attributes 00:25:12.354 128-bit Host Identifier: Supported 00:25:12.354 Non-Operational Permissive Mode: Not Supported 00:25:12.354 NVM Sets: Not Supported 00:25:12.354 Read Recovery Levels: Not Supported 00:25:12.354 Endurance Groups: Not Supported 00:25:12.354 Predictable Latency Mode: Not Supported 00:25:12.354 Traffic Based Keep ALive: Not Supported 00:25:12.354 Namespace Granularity: Not Supported 00:25:12.354 SQ Associations: Not Supported 00:25:12.354 UUID List: Not Supported 00:25:12.354 Multi-Domain Subsystem: Not Supported 00:25:12.354 Fixed Capacity Management: Not Supported 00:25:12.354 Variable Capacity Management: Not Supported 00:25:12.354 Delete Endurance Group: Not Supported 00:25:12.354 Delete NVM Set: Not Supported 00:25:12.354 Extended LBA Formats Supported: Not Supported 00:25:12.354 Flexible Data Placement Supported: Not Supported 00:25:12.354 00:25:12.354 Controller Memory Buffer Support 00:25:12.354 
================================ 00:25:12.354 Supported: No 00:25:12.354 00:25:12.354 Persistent Memory Region Support 00:25:12.354 ================================ 00:25:12.354 Supported: No 00:25:12.354 00:25:12.354 Admin Command Set Attributes 00:25:12.354 ============================ 00:25:12.354 Security Send/Receive: Not Supported 00:25:12.354 Format NVM: Not Supported 00:25:12.354 Firmware Activate/Download: Not Supported 00:25:12.354 Namespace Management: Not Supported 00:25:12.354 Device Self-Test: Not Supported 00:25:12.354 Directives: Not Supported 00:25:12.354 NVMe-MI: Not Supported 00:25:12.354 Virtualization Management: Not Supported 00:25:12.354 Doorbell Buffer Config: Not Supported 00:25:12.354 Get LBA Status Capability: Not Supported 00:25:12.354 Command & Feature Lockdown Capability: Not Supported 00:25:12.354 Abort Command Limit: 4 00:25:12.354 Async Event Request Limit: 4 00:25:12.354 Number of Firmware Slots: N/A 00:25:12.354 Firmware Slot 1 Read-Only: N/A 00:25:12.354 Firmware Activation Without Reset: N/A 00:25:12.354 Multiple Update Detection Support: N/A 00:25:12.354 Firmware Update Granularity: No Information Provided 00:25:12.354 Per-Namespace SMART Log: No 00:25:12.354 Asymmetric Namespace Access Log Page: Not Supported 00:25:12.354 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:12.354 Command Effects Log Page: Supported 00:25:12.354 Get Log Page Extended Data: Supported 00:25:12.354 Telemetry Log Pages: Not Supported 00:25:12.354 Persistent Event Log Pages: Not Supported 00:25:12.354 Supported Log Pages Log Page: May Support 00:25:12.354 Commands Supported & Effects Log Page: Not Supported 00:25:12.354 Feature Identifiers & Effects Log Page:May Support 00:25:12.354 NVMe-MI Commands & Effects Log Page: May Support 00:25:12.354 Data Area 4 for Telemetry Log: Not Supported 00:25:12.354 Error Log Page Entries Supported: 128 00:25:12.354 Keep Alive: Supported 00:25:12.354 Keep Alive Granularity: 10000 ms 00:25:12.354 00:25:12.354 NVM Command Set Attributes 00:25:12.354 ========================== 00:25:12.354 Submission Queue Entry Size 00:25:12.354 Max: 64 00:25:12.354 Min: 64 00:25:12.354 Completion Queue Entry Size 00:25:12.354 Max: 16 00:25:12.354 Min: 16 00:25:12.354 Number of Namespaces: 32 00:25:12.354 Compare Command: Supported 00:25:12.354 Write Uncorrectable Command: Not Supported 00:25:12.354 Dataset Management Command: Supported 00:25:12.354 Write Zeroes Command: Supported 00:25:12.354 Set Features Save Field: Not Supported 00:25:12.354 Reservations: Supported 00:25:12.354 Timestamp: Not Supported 00:25:12.354 Copy: Supported 00:25:12.354 Volatile Write Cache: Present 00:25:12.354 Atomic Write Unit (Normal): 1 00:25:12.354 Atomic Write Unit (PFail): 1 00:25:12.354 Atomic Compare & Write Unit: 1 00:25:12.354 Fused Compare & Write: Supported 00:25:12.354 Scatter-Gather List 00:25:12.354 SGL Command Set: Supported 00:25:12.354 SGL Keyed: Supported 00:25:12.354 SGL Bit Bucket Descriptor: Not Supported 00:25:12.354 SGL Metadata Pointer: Not Supported 00:25:12.354 Oversized SGL: Not Supported 00:25:12.354 SGL Metadata Address: Not Supported 00:25:12.354 SGL Offset: Supported 00:25:12.354 Transport SGL Data Block: Not Supported 00:25:12.354 Replay Protected Memory Block: Not Supported 00:25:12.354 00:25:12.354 Firmware Slot Information 00:25:12.354 ========================= 00:25:12.354 Active slot: 1 00:25:12.354 Slot 1 Firmware Revision: 24.01.1 00:25:12.354 00:25:12.354 00:25:12.354 Commands Supported and Effects 00:25:12.354 ============================== 
00:25:12.354 Admin Commands 00:25:12.354 -------------- 00:25:12.354 Get Log Page (02h): Supported 00:25:12.354 Identify (06h): Supported 00:25:12.354 Abort (08h): Supported 00:25:12.354 Set Features (09h): Supported 00:25:12.354 Get Features (0Ah): Supported 00:25:12.354 Asynchronous Event Request (0Ch): Supported 00:25:12.354 Keep Alive (18h): Supported 00:25:12.354 I/O Commands 00:25:12.354 ------------ 00:25:12.354 Flush (00h): Supported LBA-Change 00:25:12.354 Write (01h): Supported LBA-Change 00:25:12.354 Read (02h): Supported 00:25:12.354 Compare (05h): Supported 00:25:12.354 Write Zeroes (08h): Supported LBA-Change 00:25:12.354 Dataset Management (09h): Supported LBA-Change 00:25:12.354 Copy (19h): Supported LBA-Change 00:25:12.354 Unknown (79h): Supported LBA-Change 00:25:12.354 Unknown (7Ah): Supported 00:25:12.354 00:25:12.354 Error Log 00:25:12.354 ========= 00:25:12.354 00:25:12.354 Arbitration 00:25:12.354 =========== 00:25:12.354 Arbitration Burst: 1 00:25:12.354 00:25:12.354 Power Management 00:25:12.354 ================ 00:25:12.354 Number of Power States: 1 00:25:12.354 Current Power State: Power State #0 00:25:12.354 Power State #0: 00:25:12.354 Max Power: 0.00 W 00:25:12.354 Non-Operational State: Operational 00:25:12.354 Entry Latency: Not Reported 00:25:12.354 Exit Latency: Not Reported 00:25:12.354 Relative Read Throughput: 0 00:25:12.354 Relative Read Latency: 0 00:25:12.354 Relative Write Throughput: 0 00:25:12.354 Relative Write Latency: 0 00:25:12.354 Idle Power: Not Reported 00:25:12.354 Active Power: Not Reported 00:25:12.354 Non-Operational Permissive Mode: Not Supported 00:25:12.354 00:25:12.354 Health Information 00:25:12.354 ================== 00:25:12.354 Critical Warnings: 00:25:12.354 Available Spare Space: OK 00:25:12.354 Temperature: OK 00:25:12.354 Device Reliability: OK 00:25:12.354 Read Only: No 00:25:12.354 Volatile Memory Backup: OK 00:25:12.354 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:12.354 Temperature Threshol[2024-07-26 22:11:23.320326] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320334] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.354 [2024-07-26 22:11:23.320358] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.354 [2024-07-26 22:11:23.320364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.320371] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:12.354 [2024-07-26 22:11:23.320394] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:12.354 [2024-07-26 22:11:23.320404] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 11088 doesn't match qid 00:25:12.354 [2024-07-26 22:11:23.320418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32606 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:25:12.354 [2024-07-26 22:11:23.320425] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 11088 doesn't match qid 00:25:12.355 [2024-07-26 22:11:23.320433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32606 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320440] 
nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 11088 doesn't match qid 00:25:12.355 [2024-07-26 22:11:23.320448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32606 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320454] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 11088 doesn't match qid 00:25:12.355 [2024-07-26 22:11:23.320462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32606 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320471] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320495] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320510] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320524] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320544] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320556] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:12.355 [2024-07-26 22:11:23.320562] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:12.355 [2024-07-26 22:11:23.320569] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320578] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320600] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320612] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320620] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320649] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320662] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320670] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320701] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320714] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320723] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320754] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320769] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320777] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320807] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320821] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320830] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320863] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320876] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320885] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320913] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320926] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320935] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.320960] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.320966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.320972] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320981] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.320989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321011] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321024] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321033] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321058] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321071] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321080] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321105] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 
22:11:23.321117] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321126] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321151] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321163] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321172] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321198] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321210] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321219] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321244] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321257] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321266] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321290] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321302] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321310] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321334] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321346] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321355] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321380] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321392] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321401] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321428] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321440] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321449] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321480] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321492] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321501] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321532] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321543] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321552] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321580] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321591] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321600] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321622] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321638] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321647] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.355 [2024-07-26 22:11:23.321674] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.355 [2024-07-26 22:11:23.321680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:12.355 [2024-07-26 22:11:23.321686] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x184100 00:25:12.355 [2024-07-26 22:11:23.321695] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.321718] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.321724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.321730] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321739] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.321763] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.321768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 
22:11:23.321774] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321784] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.321812] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.321818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.321824] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321833] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.321860] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.321866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.321872] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321881] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.321902] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.321908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.321914] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321923] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.321952] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.321958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.321964] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321973] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.321980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.322000] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.322006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.322012] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.322021] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.322028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.322042] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.322048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.322056] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.322064] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.322072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.322092] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.322097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.322104] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.322113] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.322120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.322136] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.322142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.322148] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.325637] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.325647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:12.356 [2024-07-26 22:11:23.325671] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:12.356 [2024-07-26 22:11:23.325677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001c p:0 m:0 dnr:0 00:25:12.356 [2024-07-26 22:11:23.325684] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x184100 00:25:12.356 [2024-07-26 22:11:23.325691] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
shutdown complete in 5 milliseconds 00:25:12.356 d: 0 Kelvin (-273 Celsius) 00:25:12.356 Available Spare: 0% 00:25:12.356 Available Spare Threshold: 0% 00:25:12.356 Life Percentage Used: 0% 00:25:12.356 Data Units Read: 0 00:25:12.356 Data Units Written: 0 00:25:12.356 Host Read Commands: 0 00:25:12.356 Host Write Commands: 0 00:25:12.356 Controller Busy Time: 0 minutes 00:25:12.356 Power Cycles: 0 00:25:12.356 Power On Hours: 0 hours 00:25:12.356 Unsafe Shutdowns: 0 00:25:12.356 Unrecoverable Media Errors: 0 00:25:12.356 Lifetime Error Log Entries: 0 00:25:12.356 Warning Temperature Time: 0 minutes 00:25:12.356 Critical Temperature Time: 0 minutes 00:25:12.356 00:25:12.356 Number of Queues 00:25:12.356 ================ 00:25:12.356 Number of I/O Submission Queues: 127 00:25:12.356 Number of I/O Completion Queues: 127 00:25:12.356 00:25:12.356 Active Namespaces 00:25:12.356 ================= 00:25:12.356 Namespace ID:1 00:25:12.356 Error Recovery Timeout: Unlimited 00:25:12.356 Command Set Identifier: NVM (00h) 00:25:12.356 Deallocate: Supported 00:25:12.356 Deallocated/Unwritten Error: Not Supported 00:25:12.356 Deallocated Read Value: Unknown 00:25:12.356 Deallocate in Write Zeroes: Not Supported 00:25:12.356 Deallocated Guard Field: 0xFFFF 00:25:12.356 Flush: Supported 00:25:12.356 Reservation: Supported 00:25:12.356 Namespace Sharing Capabilities: Multiple Controllers 00:25:12.356 Size (in LBAs): 131072 (0GiB) 00:25:12.356 Capacity (in LBAs): 131072 (0GiB) 00:25:12.356 Utilization (in LBAs): 131072 (0GiB) 00:25:12.356 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:12.356 EUI64: ABCDEF0123456789 00:25:12.356 UUID: 7da70646-569e-4069-8078-21170496f498 00:25:12.356 Thin Provisioning: Not Supported 00:25:12.356 Per-NS Atomic Units: Yes 00:25:12.356 Atomic Boundary Size (Normal): 0 00:25:12.356 Atomic Boundary Size (PFail): 0 00:25:12.356 Atomic Boundary Offset: 0 00:25:12.356 Maximum Single Source Range Length: 65535 00:25:12.356 Maximum Copy Length: 65535 00:25:12.356 Maximum Source Range Count: 1 00:25:12.356 NGUID/EUI64 Never Reused: No 00:25:12.356 Namespace Write Protected: No 00:25:12.356 Number of LBA Formats: 1 00:25:12.356 Current LBA Format: LBA Format #00 00:25:12.356 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:12.356 00:25:12.356 22:11:23 -- host/identify.sh@51 -- # sync 00:25:12.356 22:11:23 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.356 22:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.356 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:12.356 22:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.356 22:11:23 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:12.356 22:11:23 -- host/identify.sh@56 -- # nvmftestfini 00:25:12.356 22:11:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:12.356 22:11:23 -- nvmf/common.sh@116 -- # sync 00:25:12.356 22:11:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:12.356 22:11:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:12.356 22:11:23 -- nvmf/common.sh@119 -- # set +e 00:25:12.356 22:11:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:12.356 22:11:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:12.356 rmmod nvme_rdma 00:25:12.356 rmmod nvme_fabrics 00:25:12.356 22:11:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:12.356 22:11:23 -- nvmf/common.sh@123 -- # set -e 00:25:12.356 22:11:23 -- nvmf/common.sh@124 -- # return 0 00:25:12.356 22:11:23 -- nvmf/common.sh@477 -- # '[' -n 
2296164 ']' 00:25:12.356 22:11:23 -- nvmf/common.sh@478 -- # killprocess 2296164 00:25:12.356 22:11:23 -- common/autotest_common.sh@926 -- # '[' -z 2296164 ']' 00:25:12.356 22:11:23 -- common/autotest_common.sh@930 -- # kill -0 2296164 00:25:12.356 22:11:23 -- common/autotest_common.sh@931 -- # uname 00:25:12.356 22:11:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:12.356 22:11:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2296164 00:25:12.356 22:11:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:12.356 22:11:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:12.356 22:11:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2296164' 00:25:12.356 killing process with pid 2296164 00:25:12.356 22:11:23 -- common/autotest_common.sh@945 -- # kill 2296164 00:25:12.356 [2024-07-26 22:11:23.491147] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:12.356 22:11:23 -- common/autotest_common.sh@950 -- # wait 2296164 00:25:12.614 22:11:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:12.614 22:11:23 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:12.614 00:25:12.614 real 0m9.665s 00:25:12.614 user 0m8.607s 00:25:12.614 sys 0m6.387s 00:25:12.614 22:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:12.614 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:12.614 ************************************ 00:25:12.614 END TEST nvmf_identify 00:25:12.614 ************************************ 00:25:12.614 22:11:23 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:12.614 22:11:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:12.614 22:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:12.614 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:12.614 ************************************ 00:25:12.614 START TEST nvmf_perf 00:25:12.614 ************************************ 00:25:12.614 22:11:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:12.872 * Looking for test storage... 
00:25:12.872 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:12.872 22:11:23 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.872 22:11:23 -- nvmf/common.sh@7 -- # uname -s 00:25:12.872 22:11:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.872 22:11:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.872 22:11:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.872 22:11:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.873 22:11:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.873 22:11:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.873 22:11:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.873 22:11:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.873 22:11:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.873 22:11:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.873 22:11:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:12.873 22:11:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:12.873 22:11:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.873 22:11:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.873 22:11:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.873 22:11:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:12.873 22:11:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.873 22:11:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.873 22:11:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.873 22:11:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.873 22:11:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.873 22:11:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.873 22:11:23 -- paths/export.sh@5 -- # export PATH 00:25:12.873 22:11:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.873 22:11:23 -- nvmf/common.sh@46 -- # : 0 00:25:12.873 22:11:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:12.873 22:11:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:12.873 22:11:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:12.873 22:11:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.873 22:11:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.873 22:11:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:12.873 22:11:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:12.873 22:11:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:12.873 22:11:23 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:12.873 22:11:23 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:12.873 22:11:23 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:12.873 22:11:23 -- host/perf.sh@17 -- # nvmftestinit 00:25:12.873 22:11:23 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:12.873 22:11:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.873 22:11:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:12.873 22:11:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:12.873 22:11:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:12.873 22:11:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.873 22:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:12.873 22:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.873 22:11:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:12.873 22:11:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:12.873 22:11:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:12.873 22:11:23 -- common/autotest_common.sh@10 -- # set +x 00:25:21.066 22:11:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:21.066 22:11:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:21.066 22:11:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:21.066 22:11:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:21.066 22:11:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:21.066 22:11:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:21.066 22:11:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:21.066 22:11:31 -- nvmf/common.sh@294 -- # net_devs=() 
00:25:21.066 22:11:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:21.066 22:11:31 -- nvmf/common.sh@295 -- # e810=() 00:25:21.066 22:11:31 -- nvmf/common.sh@295 -- # local -ga e810 00:25:21.066 22:11:31 -- nvmf/common.sh@296 -- # x722=() 00:25:21.066 22:11:31 -- nvmf/common.sh@296 -- # local -ga x722 00:25:21.066 22:11:31 -- nvmf/common.sh@297 -- # mlx=() 00:25:21.066 22:11:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:21.066 22:11:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.067 22:11:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:21.067 22:11:31 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:21.067 22:11:31 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:21.067 22:11:31 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:21.067 22:11:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:21.067 22:11:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:21.067 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:21.067 22:11:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:21.067 22:11:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:21.067 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:21.067 22:11:31 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:21.067 22:11:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:21.067 22:11:31 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.067 22:11:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.067 22:11:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.067 22:11:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:21.067 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:21.067 22:11:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.067 22:11:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.067 22:11:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.067 22:11:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.067 22:11:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:21.067 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:21.067 22:11:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.067 22:11:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:21.067 22:11:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:21.067 22:11:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:21.067 22:11:31 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:21.067 22:11:31 -- nvmf/common.sh@57 -- # uname 00:25:21.067 22:11:31 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:21.067 22:11:31 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:21.067 22:11:31 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:21.067 22:11:31 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:21.067 22:11:31 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:21.067 22:11:31 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:21.067 22:11:31 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:21.067 22:11:31 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:21.067 22:11:31 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:21.067 22:11:31 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:21.067 22:11:31 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:21.067 22:11:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:21.067 22:11:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:21.067 22:11:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:21.067 22:11:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:21.067 22:11:31 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:21.067 22:11:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:21.067 22:11:31 -- nvmf/common.sh@104 -- # continue 2 00:25:21.067 22:11:31 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.067 22:11:31 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:21.067 22:11:31 -- 
nvmf/common.sh@104 -- # continue 2 00:25:21.067 22:11:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:21.067 22:11:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:21.067 22:11:31 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:21.067 22:11:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:21.067 22:11:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:21.067 22:11:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:21.067 22:11:31 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:21.067 22:11:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:21.067 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:21.067 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:21.067 altname enp217s0f0np0 00:25:21.067 altname ens818f0np0 00:25:21.067 inet 192.168.100.8/24 scope global mlx_0_0 00:25:21.067 valid_lft forever preferred_lft forever 00:25:21.067 22:11:31 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:21.067 22:11:31 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:21.067 22:11:31 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:21.067 22:11:31 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:21.067 22:11:31 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:21.067 22:11:31 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:21.067 22:11:31 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:21.067 22:11:31 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:21.067 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:21.067 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:21.067 altname enp217s0f1np1 00:25:21.067 altname ens818f1np1 00:25:21.067 inet 192.168.100.9/24 scope global mlx_0_1 00:25:21.067 valid_lft forever preferred_lft forever 00:25:21.067 22:11:31 -- nvmf/common.sh@410 -- # return 0 00:25:21.067 22:11:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:21.067 22:11:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:21.067 22:11:31 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:21.067 22:11:31 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:21.067 22:11:31 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:21.067 22:11:31 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:21.067 22:11:31 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:21.067 22:11:31 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:21.067 22:11:31 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:21.067 22:11:32 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:21.067 22:11:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:21.067 22:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.067 22:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:21.067 22:11:32 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:21.067 22:11:32 -- nvmf/common.sh@104 -- # continue 2 00:25:21.067 22:11:32 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:21.067 22:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.067 22:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:21.067 22:11:32 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.067 22:11:32 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:25:21.067 22:11:32 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:21.067 22:11:32 -- nvmf/common.sh@104 -- # continue 2 00:25:21.067 22:11:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:21.067 22:11:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:21.067 22:11:32 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:21.067 22:11:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:21.067 22:11:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:21.067 22:11:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:21.068 22:11:32 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:21.068 22:11:32 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:21.068 22:11:32 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:21.068 22:11:32 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:21.068 22:11:32 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:21.068 22:11:32 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:21.068 22:11:32 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:21.068 192.168.100.9' 00:25:21.068 22:11:32 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:21.068 192.168.100.9' 00:25:21.068 22:11:32 -- nvmf/common.sh@445 -- # head -n 1 00:25:21.068 22:11:32 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:21.068 22:11:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:21.068 192.168.100.9' 00:25:21.068 22:11:32 -- nvmf/common.sh@446 -- # tail -n +2 00:25:21.068 22:11:32 -- nvmf/common.sh@446 -- # head -n 1 00:25:21.068 22:11:32 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:21.068 22:11:32 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:21.068 22:11:32 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:21.068 22:11:32 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:21.068 22:11:32 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:21.068 22:11:32 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:21.068 22:11:32 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:21.068 22:11:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:21.068 22:11:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:21.068 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:25:21.068 22:11:32 -- nvmf/common.sh@469 -- # nvmfpid=2300411 00:25:21.068 22:11:32 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.068 22:11:32 -- nvmf/common.sh@470 -- # waitforlisten 2300411 00:25:21.068 22:11:32 -- common/autotest_common.sh@819 -- # '[' -z 2300411 ']' 00:25:21.068 22:11:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.068 22:11:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:21.068 22:11:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.068 22:11:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:21.068 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:25:21.068 [2024-07-26 22:11:32.129494] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:21.068 [2024-07-26 22:11:32.129544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.068 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.068 [2024-07-26 22:11:32.214153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.068 [2024-07-26 22:11:32.253224] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:21.068 [2024-07-26 22:11:32.253328] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.068 [2024-07-26 22:11:32.253338] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.068 [2024-07-26 22:11:32.253348] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.068 [2024-07-26 22:11:32.253401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.068 [2024-07-26 22:11:32.253514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.068 [2024-07-26 22:11:32.253598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.068 [2024-07-26 22:11:32.253600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.006 22:11:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.006 22:11:32 -- common/autotest_common.sh@852 -- # return 0 00:25:22.006 22:11:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:22.006 22:11:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:22.006 22:11:32 -- common/autotest_common.sh@10 -- # set +x 00:25:22.006 22:11:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.006 22:11:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:22.006 22:11:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:25.293 22:11:36 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:25.293 22:11:36 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:25.293 22:11:36 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:25.293 22:11:36 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:25.293 22:11:36 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:25.293 22:11:36 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:25.293 22:11:36 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:25.293 22:11:36 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:25:25.293 22:11:36 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:25:25.552 [2024-07-26 22:11:36.553885] rdma.c:2778:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:25:25.552 [2024-07-26 22:11:36.575447] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17aa920/0x17b8dc0) succeed. 00:25:25.552 [2024-07-26 22:11:36.585867] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17abf10/0x1858ec0) succeed. 
00:25:25.552 22:11:36 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.811 22:11:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:25.811 22:11:36 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:26.070 22:11:37 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:26.070 22:11:37 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:26.070 22:11:37 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:26.329 [2024-07-26 22:11:37.397244] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:26.329 22:11:37 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:26.588 22:11:37 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:26.588 22:11:37 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:26.588 22:11:37 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:26.588 22:11:37 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:27.967 Initializing NVMe Controllers 00:25:27.967 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:27.967 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:27.967 Initialization complete. Launching workers. 00:25:27.967 ======================================================== 00:25:27.967 Latency(us) 00:25:27.967 Device Information : IOPS MiB/s Average min max 00:25:27.967 PCIE (0000:d8:00.0) NSID 1 from core 0: 103593.67 404.66 308.34 9.04 4230.87 00:25:27.967 ======================================================== 00:25:27.967 Total : 103593.67 404.66 308.34 9.04 4230.87 00:25:27.967 00:25:27.967 22:11:38 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:27.967 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.255 Initializing NVMe Controllers 00:25:31.255 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:31.255 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:31.255 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:31.255 Initialization complete. Launching workers. 
00:25:31.255 ======================================================== 00:25:31.255 Latency(us) 00:25:31.255 Device Information : IOPS MiB/s Average min max 00:25:31.255 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6775.00 26.46 147.40 47.27 4096.69 00:25:31.255 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5222.00 20.40 191.31 65.55 4109.37 00:25:31.255 ======================================================== 00:25:31.255 Total : 11997.00 46.86 166.51 47.27 4109.37 00:25:31.255 00:25:31.255 22:11:42 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:31.255 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.540 Initializing NVMe Controllers 00:25:34.540 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:34.540 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:34.540 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:34.540 Initialization complete. Launching workers. 00:25:34.540 ======================================================== 00:25:34.540 Latency(us) 00:25:34.540 Device Information : IOPS MiB/s Average min max 00:25:34.540 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19277.00 75.30 1660.10 455.39 6927.38 00:25:34.540 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.33 6114.92 9853.16 00:25:34.540 ======================================================== 00:25:34.540 Total : 23309.00 91.05 2751.82 455.39 9853.16 00:25:34.540 00:25:34.540 22:11:45 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:34.540 22:11:45 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:34.540 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.813 Initializing NVMe Controllers 00:25:39.813 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.813 Controller IO queue size 128, less than required. 00:25:39.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:39.814 Controller IO queue size 128, less than required. 00:25:39.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:39.814 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:39.814 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:39.814 Initialization complete. Launching workers. 
00:25:39.814 ======================================================== 00:25:39.814 Latency(us) 00:25:39.814 Device Information : IOPS MiB/s Average min max 00:25:39.814 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4052.18 1013.04 31755.12 15360.11 67460.42 00:25:39.814 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4114.61 1028.65 30838.44 14505.91 48449.86 00:25:39.814 ======================================================== 00:25:39.814 Total : 8166.79 2041.70 31293.28 14505.91 67460.42 00:25:39.814 00:25:39.814 22:11:50 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:39.814 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.814 No valid NVMe controllers or AIO or URING devices found 00:25:39.814 Initializing NVMe Controllers 00:25:39.814 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.814 Controller IO queue size 128, less than required. 00:25:39.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:39.814 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:39.814 Controller IO queue size 128, less than required. 00:25:39.814 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:39.814 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:39.814 WARNING: Some requested NVMe devices were skipped 00:25:39.814 22:11:50 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:39.814 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.006 Initializing NVMe Controllers 00:25:44.006 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:44.006 Controller IO queue size 128, less than required. 00:25:44.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:44.006 Controller IO queue size 128, less than required. 00:25:44.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:44.006 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:44.006 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:44.006 Initialization complete. Launching workers. 
00:25:44.006 00:25:44.006 ==================== 00:25:44.006 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:44.006 RDMA transport: 00:25:44.006 dev name: mlx5_0 00:25:44.006 polls: 423781 00:25:44.006 idle_polls: 419828 00:25:44.006 completions: 45669 00:25:44.006 queued_requests: 1 00:25:44.006 total_send_wrs: 22898 00:25:44.006 send_doorbell_updates: 3757 00:25:44.006 total_recv_wrs: 22898 00:25:44.006 recv_doorbell_updates: 3757 00:25:44.006 --------------------------------- 00:25:44.006 00:25:44.006 ==================== 00:25:44.006 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:44.006 RDMA transport: 00:25:44.006 dev name: mlx5_0 00:25:44.006 polls: 420256 00:25:44.006 idle_polls: 419978 00:25:44.006 completions: 20227 00:25:44.006 queued_requests: 1 00:25:44.006 total_send_wrs: 10177 00:25:44.006 send_doorbell_updates: 258 00:25:44.006 total_recv_wrs: 10177 00:25:44.006 recv_doorbell_updates: 258 00:25:44.006 --------------------------------- 00:25:44.006 ======================================================== 00:25:44.006 Latency(us) 00:25:44.006 Device Information : IOPS MiB/s Average min max 00:25:44.006 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5756.50 1439.12 22290.44 11238.96 52864.20 00:25:44.006 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2576.00 644.00 49666.98 29245.52 71966.51 00:25:44.006 ======================================================== 00:25:44.006 Total : 8332.50 2083.12 30753.92 11238.96 71966.51 00:25:44.006 00:25:44.006 22:11:54 -- host/perf.sh@66 -- # sync 00:25:44.006 22:11:54 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.006 22:11:54 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:44.006 22:11:54 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:44.006 22:11:54 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:50.613 22:12:00 -- host/perf.sh@72 -- # ls_guid=f50eb41a-31b1-4a2f-b36c-1f63ba87b61c 00:25:50.613 22:12:00 -- host/perf.sh@73 -- # get_lvs_free_mb f50eb41a-31b1-4a2f-b36c-1f63ba87b61c 00:25:50.613 22:12:00 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f50eb41a-31b1-4a2f-b36c-1f63ba87b61c 00:25:50.613 22:12:00 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:50.613 22:12:00 -- common/autotest_common.sh@1345 -- # local fc 00:25:50.613 22:12:00 -- common/autotest_common.sh@1346 -- # local cs 00:25:50.613 22:12:00 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:50.613 22:12:01 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:50.613 { 00:25:50.613 "uuid": "f50eb41a-31b1-4a2f-b36c-1f63ba87b61c", 00:25:50.613 "name": "lvs_0", 00:25:50.613 "base_bdev": "Nvme0n1", 00:25:50.613 "total_data_clusters": 476466, 00:25:50.613 "free_clusters": 476466, 00:25:50.613 "block_size": 512, 00:25:50.613 "cluster_size": 4194304 00:25:50.613 } 00:25:50.613 ]' 00:25:50.613 22:12:01 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f50eb41a-31b1-4a2f-b36c-1f63ba87b61c") .free_clusters' 00:25:50.613 22:12:01 -- common/autotest_common.sh@1348 -- # fc=476466 00:25:50.613 22:12:01 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f50eb41a-31b1-4a2f-b36c-1f63ba87b61c") .cluster_size' 00:25:50.613 
22:12:01 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:50.613 22:12:01 -- common/autotest_common.sh@1352 -- # free_mb=1905864 00:25:50.613 22:12:01 -- common/autotest_common.sh@1353 -- # echo 1905864 00:25:50.613 1905864 00:25:50.613 22:12:01 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:50.613 22:12:01 -- host/perf.sh@78 -- # free_mb=20480 00:25:50.613 22:12:01 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f50eb41a-31b1-4a2f-b36c-1f63ba87b61c lbd_0 20480 00:25:50.613 22:12:01 -- host/perf.sh@80 -- # lb_guid=34366420-2575-46eb-afcf-04b81a3cafa6 00:25:50.613 22:12:01 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 34366420-2575-46eb-afcf-04b81a3cafa6 lvs_n_0 00:25:52.518 22:12:03 -- host/perf.sh@83 -- # ls_nested_guid=1a754013-3819-4a3c-9dbb-3c2a52c7b0c9 00:25:52.518 22:12:03 -- host/perf.sh@84 -- # get_lvs_free_mb 1a754013-3819-4a3c-9dbb-3c2a52c7b0c9 00:25:52.518 22:12:03 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1a754013-3819-4a3c-9dbb-3c2a52c7b0c9 00:25:52.518 22:12:03 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:52.518 22:12:03 -- common/autotest_common.sh@1345 -- # local fc 00:25:52.518 22:12:03 -- common/autotest_common.sh@1346 -- # local cs 00:25:52.518 22:12:03 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:52.775 22:12:03 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:52.775 { 00:25:52.775 "uuid": "f50eb41a-31b1-4a2f-b36c-1f63ba87b61c", 00:25:52.775 "name": "lvs_0", 00:25:52.775 "base_bdev": "Nvme0n1", 00:25:52.775 "total_data_clusters": 476466, 00:25:52.775 "free_clusters": 471346, 00:25:52.775 "block_size": 512, 00:25:52.775 "cluster_size": 4194304 00:25:52.775 }, 00:25:52.775 { 00:25:52.775 "uuid": "1a754013-3819-4a3c-9dbb-3c2a52c7b0c9", 00:25:52.775 "name": "lvs_n_0", 00:25:52.775 "base_bdev": "34366420-2575-46eb-afcf-04b81a3cafa6", 00:25:52.775 "total_data_clusters": 5114, 00:25:52.775 "free_clusters": 5114, 00:25:52.775 "block_size": 512, 00:25:52.775 "cluster_size": 4194304 00:25:52.775 } 00:25:52.775 ]' 00:25:52.775 22:12:03 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1a754013-3819-4a3c-9dbb-3c2a52c7b0c9") .free_clusters' 00:25:52.775 22:12:03 -- common/autotest_common.sh@1348 -- # fc=5114 00:25:52.775 22:12:03 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1a754013-3819-4a3c-9dbb-3c2a52c7b0c9") .cluster_size' 00:25:52.775 22:12:03 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:52.775 22:12:03 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:25:52.775 22:12:03 -- common/autotest_common.sh@1353 -- # echo 20456 00:25:52.775 20456 00:25:52.775 22:12:03 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:52.775 22:12:03 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1a754013-3819-4a3c-9dbb-3c2a52c7b0c9 lbd_nest_0 20456 00:25:53.033 22:12:04 -- host/perf.sh@88 -- # lb_nested_guid=66edb862-e490-4910-8b4f-2c65da4d4977 00:25:53.034 22:12:04 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:53.292 22:12:04 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:53.292 22:12:04 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 66edb862-e490-4910-8b4f-2c65da4d4977 00:25:53.292 22:12:04 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:53.549 22:12:04 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:53.549 22:12:04 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:53.549 22:12:04 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:53.549 22:12:04 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:53.549 22:12:04 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:53.549 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.762 Initializing NVMe Controllers 00:26:05.762 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.762 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:05.762 Initialization complete. Launching workers. 00:26:05.762 ======================================================== 00:26:05.762 Latency(us) 00:26:05.762 Device Information : IOPS MiB/s Average min max 00:26:05.762 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5901.70 2.88 168.97 67.72 8023.04 00:26:05.762 ======================================================== 00:26:05.762 Total : 5901.70 2.88 168.97 67.72 8023.04 00:26:05.762 00:26:05.762 22:12:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:05.762 22:12:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:05.762 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.974 Initializing NVMe Controllers 00:26:17.974 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.974 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:17.974 Initialization complete. Launching workers. 00:26:17.974 ======================================================== 00:26:17.974 Latency(us) 00:26:17.974 Device Information : IOPS MiB/s Average min max 00:26:17.974 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2668.50 333.56 374.39 156.91 8068.45 00:26:17.974 ======================================================== 00:26:17.974 Total : 2668.50 333.56 374.39 156.91 8068.45 00:26:17.974 00:26:17.974 22:12:27 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:17.974 22:12:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:17.974 22:12:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:17.974 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.996 Initializing NVMe Controllers 00:26:27.996 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:27.996 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:27.996 Initialization complete. Launching workers. 
00:26:27.996 ======================================================== 00:26:27.996 Latency(us) 00:26:27.996 Device Information : IOPS MiB/s Average min max 00:26:27.996 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12105.10 5.91 2643.08 944.14 7891.61 00:26:27.996 ======================================================== 00:26:27.996 Total : 12105.10 5.91 2643.08 944.14 7891.61 00:26:27.996 00:26:27.996 22:12:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:27.996 22:12:38 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:27.996 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.215 Initializing NVMe Controllers 00:26:40.215 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.215 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:40.215 Initialization complete. Launching workers. 00:26:40.215 ======================================================== 00:26:40.215 Latency(us) 00:26:40.215 Device Information : IOPS MiB/s Average min max 00:26:40.215 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3996.82 499.60 8012.31 4892.58 15839.61 00:26:40.215 ======================================================== 00:26:40.215 Total : 3996.82 499.60 8012.31 4892.58 15839.61 00:26:40.215 00:26:40.215 22:12:50 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:40.215 22:12:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:40.215 22:12:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:40.215 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.425 Initializing NVMe Controllers 00:26:52.425 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.425 Controller IO queue size 128, less than required. 00:26:52.425 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.425 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:52.425 Initialization complete. Launching workers. 00:26:52.425 ======================================================== 00:26:52.425 Latency(us) 00:26:52.425 Device Information : IOPS MiB/s Average min max 00:26:52.425 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19422.70 9.48 6592.83 1892.63 14819.12 00:26:52.425 ======================================================== 00:26:52.425 Total : 19422.70 9.48 6592.83 1892.63 14819.12 00:26:52.425 00:26:52.425 22:13:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:52.425 22:13:01 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:52.425 EAL: No free 2048 kB hugepages reported on node 1 00:27:02.402 Initializing NVMe Controllers 00:27:02.402 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:02.402 Controller IO queue size 128, less than required. 00:27:02.402 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:02.402 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:02.402 Initialization complete. Launching workers. 00:27:02.402 ======================================================== 00:27:02.402 Latency(us) 00:27:02.402 Device Information : IOPS MiB/s Average min max 00:27:02.402 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11262.70 1407.84 11365.44 3193.61 23440.52 00:27:02.402 ======================================================== 00:27:02.402 Total : 11262.70 1407.84 11365.44 3193.61 23440.52 00:27:02.402 00:27:02.402 22:13:12 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:02.402 22:13:13 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 66edb862-e490-4910-8b4f-2c65da4d4977 00:27:02.660 22:13:13 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:02.660 22:13:13 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 34366420-2575-46eb-afcf-04b81a3cafa6 00:27:02.919 22:13:14 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:03.178 22:13:14 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:03.178 22:13:14 -- host/perf.sh@114 -- # nvmftestfini 00:27:03.178 22:13:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:03.178 22:13:14 -- nvmf/common.sh@116 -- # sync 00:27:03.178 22:13:14 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:03.178 22:13:14 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:03.178 22:13:14 -- nvmf/common.sh@119 -- # set +e 00:27:03.178 22:13:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:03.178 22:13:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:03.178 rmmod nvme_rdma 00:27:03.178 rmmod nvme_fabrics 00:27:03.178 22:13:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:03.178 22:13:14 -- nvmf/common.sh@123 -- # set -e 00:27:03.178 22:13:14 -- nvmf/common.sh@124 -- # return 0 00:27:03.178 22:13:14 -- nvmf/common.sh@477 -- # '[' -n 2300411 ']' 00:27:03.178 22:13:14 -- nvmf/common.sh@478 -- # killprocess 2300411 00:27:03.178 22:13:14 -- common/autotest_common.sh@926 -- # '[' -z 2300411 ']' 00:27:03.178 22:13:14 -- common/autotest_common.sh@930 -- # kill -0 2300411 00:27:03.178 22:13:14 -- common/autotest_common.sh@931 -- # uname 00:27:03.178 22:13:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:03.178 22:13:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2300411 00:27:03.178 22:13:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:03.178 22:13:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:03.178 22:13:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2300411' 00:27:03.178 killing process with pid 2300411 00:27:03.178 22:13:14 -- common/autotest_common.sh@945 -- # kill 2300411 00:27:03.178 22:13:14 -- common/autotest_common.sh@950 -- # wait 2300411 00:27:05.712 22:13:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:05.712 22:13:16 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:05.712 00:27:05.712 real 1m53.078s 00:27:05.712 user 7m2.184s 00:27:05.712 sys 0m8.346s 00:27:05.712 22:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.712 22:13:16 -- 
common/autotest_common.sh@10 -- # set +x 00:27:05.712 ************************************ 00:27:05.712 END TEST nvmf_perf 00:27:05.712 ************************************ 00:27:05.712 22:13:16 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:05.712 22:13:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:05.712 22:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:05.712 22:13:16 -- common/autotest_common.sh@10 -- # set +x 00:27:05.712 ************************************ 00:27:05.712 START TEST nvmf_fio_host 00:27:05.712 ************************************ 00:27:05.712 22:13:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:05.971 * Looking for test storage... 00:27:05.971 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:05.971 22:13:17 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:05.971 22:13:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.971 22:13:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.971 22:13:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.971 22:13:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.971 22:13:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.971 22:13:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.972 22:13:17 -- paths/export.sh@5 -- # export PATH 00:27:05.972 22:13:17 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.972 22:13:17 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.972 22:13:17 -- nvmf/common.sh@7 -- # uname -s 00:27:05.972 22:13:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.972 22:13:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.972 22:13:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.972 22:13:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.972 22:13:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.972 22:13:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.972 22:13:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.972 22:13:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.972 22:13:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.972 22:13:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.972 22:13:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:05.972 22:13:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:05.972 22:13:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.972 22:13:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.972 22:13:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.972 22:13:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:05.972 22:13:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.972 22:13:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.972 22:13:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.972 22:13:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.972 22:13:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.972 
22:13:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.972 22:13:17 -- paths/export.sh@5 -- # export PATH 00:27:05.972 22:13:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.972 22:13:17 -- nvmf/common.sh@46 -- # : 0 00:27:05.972 22:13:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:05.972 22:13:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:05.972 22:13:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:05.972 22:13:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.972 22:13:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.972 22:13:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:05.972 22:13:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:05.972 22:13:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:05.972 22:13:17 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:05.972 22:13:17 -- host/fio.sh@14 -- # nvmftestinit 00:27:05.972 22:13:17 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:05.972 22:13:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.972 22:13:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:05.972 22:13:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:05.972 22:13:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:05.972 22:13:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.972 22:13:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.972 22:13:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.972 22:13:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:05.972 22:13:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:05.972 22:13:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:05.972 22:13:17 -- common/autotest_common.sh@10 -- # set +x 00:27:14.124 22:13:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:14.124 22:13:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:14.124 22:13:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:14.124 22:13:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:14.124 22:13:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:14.124 22:13:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:14.124 22:13:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:14.124 22:13:25 -- 
nvmf/common.sh@294 -- # net_devs=() 00:27:14.124 22:13:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:14.124 22:13:25 -- nvmf/common.sh@295 -- # e810=() 00:27:14.124 22:13:25 -- nvmf/common.sh@295 -- # local -ga e810 00:27:14.124 22:13:25 -- nvmf/common.sh@296 -- # x722=() 00:27:14.124 22:13:25 -- nvmf/common.sh@296 -- # local -ga x722 00:27:14.124 22:13:25 -- nvmf/common.sh@297 -- # mlx=() 00:27:14.124 22:13:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:14.124 22:13:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:14.124 22:13:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:14.125 22:13:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:14.125 22:13:25 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:14.125 22:13:25 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:14.125 22:13:25 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:14.125 22:13:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:14.125 22:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:14.125 22:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:14.125 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:14.125 22:13:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:14.125 22:13:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:14.125 22:13:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:14.125 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:14.125 22:13:25 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:14.125 22:13:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:14.125 22:13:25 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:14.125 
22:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.125 22:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:14.125 22:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.125 22:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:14.125 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:14.125 22:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.125 22:13:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:14.125 22:13:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:14.125 22:13:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:14.125 22:13:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:14.125 22:13:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:14.125 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:14.125 22:13:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:14.125 22:13:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:14.125 22:13:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:14.125 22:13:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:14.125 22:13:25 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:14.125 22:13:25 -- nvmf/common.sh@57 -- # uname 00:27:14.125 22:13:25 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:14.125 22:13:25 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:14.125 22:13:25 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:14.125 22:13:25 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:14.125 22:13:25 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:14.125 22:13:25 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:14.125 22:13:25 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:14.125 22:13:25 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:14.125 22:13:25 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:14.125 22:13:25 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:14.125 22:13:25 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:14.125 22:13:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:14.125 22:13:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:14.125 22:13:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:14.125 22:13:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:14.125 22:13:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:14.125 22:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:14.125 22:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:14.125 22:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:14.125 22:13:25 -- nvmf/common.sh@104 -- # continue 2 00:27:14.125 22:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:14.125 22:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:14.125 22:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:14.125 22:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:27:14.125 22:13:25 -- nvmf/common.sh@104 -- # continue 2 00:27:14.125 22:13:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:14.125 22:13:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:14.125 22:13:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:14.125 22:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:14.125 22:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:14.125 22:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:14.125 22:13:25 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:14.125 22:13:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:14.125 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:14.125 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:14.125 altname enp217s0f0np0 00:27:14.125 altname ens818f0np0 00:27:14.125 inet 192.168.100.8/24 scope global mlx_0_0 00:27:14.125 valid_lft forever preferred_lft forever 00:27:14.125 22:13:25 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:14.125 22:13:25 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:14.125 22:13:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:14.125 22:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:14.125 22:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:14.125 22:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:14.125 22:13:25 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:14.125 22:13:25 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:14.125 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:14.125 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:14.125 altname enp217s0f1np1 00:27:14.125 altname ens818f1np1 00:27:14.125 inet 192.168.100.9/24 scope global mlx_0_1 00:27:14.125 valid_lft forever preferred_lft forever 00:27:14.125 22:13:25 -- nvmf/common.sh@410 -- # return 0 00:27:14.125 22:13:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:14.125 22:13:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:14.125 22:13:25 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:14.125 22:13:25 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:27:14.125 22:13:25 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:14.125 22:13:25 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:14.125 22:13:25 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:14.384 22:13:25 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:14.384 22:13:25 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:14.384 22:13:25 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:14.384 22:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:14.384 22:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:14.384 22:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:14.384 22:13:25 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:14.384 22:13:25 -- nvmf/common.sh@104 -- # continue 2 00:27:14.384 22:13:25 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:14.384 22:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:14.384 22:13:25 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:14.384 22:13:25 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:14.384 22:13:25 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:14.384 22:13:25 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:14.384 22:13:25 -- nvmf/common.sh@104 -- # continue 2 00:27:14.384 22:13:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:14.384 22:13:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:14.384 22:13:25 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:14.384 22:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:14.384 22:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:14.384 22:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:14.384 22:13:25 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:14.384 22:13:25 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:14.384 22:13:25 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:14.384 22:13:25 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:14.384 22:13:25 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:14.384 22:13:25 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:14.384 22:13:25 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:14.384 192.168.100.9' 00:27:14.384 22:13:25 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:14.384 192.168.100.9' 00:27:14.384 22:13:25 -- nvmf/common.sh@445 -- # head -n 1 00:27:14.384 22:13:25 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:14.384 22:13:25 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:14.384 192.168.100.9' 00:27:14.384 22:13:25 -- nvmf/common.sh@446 -- # tail -n +2 00:27:14.384 22:13:25 -- nvmf/common.sh@446 -- # head -n 1 00:27:14.384 22:13:25 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:14.384 22:13:25 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:14.384 22:13:25 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:14.384 22:13:25 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:14.384 22:13:25 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:14.384 22:13:25 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:14.384 22:13:25 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:14.384 22:13:25 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:14.384 22:13:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:14.384 22:13:25 -- common/autotest_common.sh@10 -- # set +x 00:27:14.384 22:13:25 -- host/fio.sh@24 -- # nvmfpid=2322458 00:27:14.384 22:13:25 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:14.384 22:13:25 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:14.384 22:13:25 -- host/fio.sh@28 -- # waitforlisten 2322458 00:27:14.384 22:13:25 -- common/autotest_common.sh@819 -- # '[' -z 2322458 ']' 00:27:14.384 22:13:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.384 22:13:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:14.384 22:13:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.384 22:13:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:14.384 22:13:25 -- common/autotest_common.sh@10 -- # set +x 00:27:14.384 [2024-07-26 22:13:25.508533] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
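The address discovery traced above boils down to a short iproute2 pipeline. A minimal sketch, assuming the mlx_0_0/mlx_0_1 interface names that get_rdma_if_list reported (only commands and addresses already visible in the trace are used):

    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9

The first address ends up as NVMF_FIRST_TARGET_IP and the second as NVMF_SECOND_TARGET_IP, which is why every listener below is created on 192.168.100.8.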
00:27:14.384 [2024-07-26 22:13:25.508579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.384 [2024-07-26 22:13:25.594344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:14.643 [2024-07-26 22:13:25.632762] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:14.643 [2024-07-26 22:13:25.632889] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:14.643 [2024-07-26 22:13:25.632899] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.643 [2024-07-26 22:13:25.632908] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.643 [2024-07-26 22:13:25.632949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:14.643 [2024-07-26 22:13:25.633047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:14.643 [2024-07-26 22:13:25.633131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:14.643 [2024-07-26 22:13:25.633133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.209 22:13:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:15.209 22:13:26 -- common/autotest_common.sh@852 -- # return 0 00:27:15.209 22:13:26 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:15.468 [2024-07-26 22:13:26.471689] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x128d4b0/0x12919a0) succeed. 00:27:15.468 [2024-07-26 22:13:26.482169] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x128eaa0/0x12d3030) succeed. 
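Stripped of the xtrace noise, the target bring-up running here amounts to a handful of commands. A condensed sketch using only arguments visible in the surrounding trace (workspace paths shortened; not a verbatim replay of host/fio.sh):

    # start the target on a 4-core mask with all tracepoint groups enabled,
    # then wait for the RPC socket at /var/tmp/spdk.sock
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # RDMA transport plus a Malloc-backed subsystem listening on the first RDMA address
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420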
00:27:15.468 22:13:26 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:15.468 22:13:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:15.468 22:13:26 -- common/autotest_common.sh@10 -- # set +x 00:27:15.468 22:13:26 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:15.726 Malloc1 00:27:15.726 22:13:26 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:15.985 22:13:27 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:15.985 22:13:27 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:16.243 [2024-07-26 22:13:27.352060] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:16.243 22:13:27 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:16.502 22:13:27 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:16.502 22:13:27 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:16.502 22:13:27 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:16.502 22:13:27 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:16.502 22:13:27 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.502 22:13:27 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:16.502 22:13:27 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.502 22:13:27 -- common/autotest_common.sh@1320 -- # shift 00:27:16.502 22:13:27 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:16.502 22:13:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.502 22:13:27 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.502 22:13:27 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:16.502 22:13:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:16.502 22:13:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:16.502 22:13:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:16.502 22:13:27 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.502 22:13:27 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.502 22:13:27 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:16.502 22:13:27 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:16.502 22:13:27 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:16.502 22:13:27 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:16.502 22:13:27 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:16.502 22:13:27 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:16.760 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:16.760 fio-3.35 00:27:16.760 Starting 1 thread 00:27:16.760 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.294 00:27:19.294 test: (groupid=0, jobs=1): err= 0: pid=2323143: Fri Jul 26 22:13:30 2024 00:27:19.294 read: IOPS=18.7k, BW=73.2MiB/s (76.7MB/s)(147MiB/2003msec) 00:27:19.294 slat (nsec): min=1333, max=41413, avg=1466.97, stdev=486.46 00:27:19.294 clat (usec): min=1846, max=6059, avg=3392.96, stdev=76.81 00:27:19.294 lat (usec): min=1868, max=6060, avg=3394.43, stdev=76.74 00:27:19.294 clat percentiles (usec): 00:27:19.294 | 1.00th=[ 3359], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3392], 00:27:19.294 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3392], 00:27:19.294 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3425], 95.00th=[ 3425], 00:27:19.294 | 99.00th=[ 3425], 99.50th=[ 3589], 99.90th=[ 4359], 99.95th=[ 5211], 00:27:19.294 | 99.99th=[ 6063] 00:27:19.294 bw ( KiB/s): min=73336, max=75560, per=99.98%, avg=74914.00, stdev=1057.24, samples=4 00:27:19.294 iops : min=18334, max=18890, avg=18728.50, stdev=264.31, samples=4 00:27:19.294 write: IOPS=18.7k, BW=73.2MiB/s (76.7MB/s)(147MiB/2003msec); 0 zone resets 00:27:19.294 slat (nsec): min=1388, max=20045, avg=1558.88, stdev=483.17 00:27:19.294 clat (usec): min=2523, max=6070, avg=3391.40, stdev=78.22 00:27:19.294 lat (usec): min=2528, max=6072, avg=3392.96, stdev=78.17 00:27:19.294 clat percentiles (usec): 00:27:19.294 | 1.00th=[ 3359], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3392], 00:27:19.294 | 30.00th=[ 3392], 40.00th=[ 3392], 50.00th=[ 3392], 60.00th=[ 3392], 00:27:19.294 | 70.00th=[ 3392], 80.00th=[ 3392], 90.00th=[ 3425], 95.00th=[ 3425], 00:27:19.294 | 99.00th=[ 3425], 99.50th=[ 3687], 99.90th=[ 4293], 99.95th=[ 5211], 00:27:19.294 | 99.99th=[ 6063] 00:27:19.294 bw ( KiB/s): min=73328, max=75584, per=99.98%, avg=74924.00, stdev=1075.03, samples=4 00:27:19.294 iops : min=18332, max=18896, avg=18731.00, stdev=268.76, samples=4 00:27:19.294 lat (msec) : 2=0.01%, 4=99.88%, 10=0.11% 00:27:19.294 cpu : usr=99.50%, sys=0.15%, ctx=15, majf=0, minf=2 00:27:19.294 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:19.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:19.294 issued rwts: total=37520,37526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.294 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:19.294 00:27:19.294 Run status group 0 (all jobs): 00:27:19.294 READ: bw=73.2MiB/s (76.7MB/s), 73.2MiB/s-73.2MiB/s (76.7MB/s-76.7MB/s), io=147MiB (154MB), run=2003-2003msec 00:27:19.294 WRITE: bw=73.2MiB/s (76.7MB/s), 73.2MiB/s-73.2MiB/s (76.7MB/s-76.7MB/s), io=147MiB (154MB), run=2003-2003msec 00:27:19.294 22:13:30 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:19.294 22:13:30 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:19.294 22:13:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:19.294 22:13:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:19.294 22:13:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:19.295 22:13:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:19.295 22:13:30 -- common/autotest_common.sh@1320 -- # shift 00:27:19.295 22:13:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:19.295 22:13:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.295 22:13:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:19.295 22:13:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:19.295 22:13:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:19.295 22:13:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:19.295 22:13:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:19.295 22:13:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.295 22:13:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:19.295 22:13:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:19.295 22:13:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:19.295 22:13:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:19.295 22:13:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:19.295 22:13:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:19.295 22:13:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:19.554 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:19.554 fio-3.35 00:27:19.554 Starting 1 thread 00:27:19.554 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.083 00:27:22.083 test: (groupid=0, jobs=1): err= 0: pid=2323673: Fri Jul 26 22:13:32 2024 00:27:22.083 read: IOPS=14.9k, BW=232MiB/s (243MB/s)(457MiB/1969msec) 00:27:22.083 slat (nsec): min=2226, max=34683, avg=2603.81, stdev=1003.89 00:27:22.083 clat (usec): min=492, max=8529, avg=1552.68, stdev=1206.28 00:27:22.083 lat (usec): min=495, max=8545, avg=1555.29, stdev=1206.67 00:27:22.083 clat percentiles (usec): 00:27:22.083 | 1.00th=[ 668], 5.00th=[ 758], 10.00th=[ 816], 20.00th=[ 898], 00:27:22.083 | 30.00th=[ 971], 40.00th=[ 1057], 50.00th=[ 1172], 60.00th=[ 1270], 00:27:22.083 | 70.00th=[ 1401], 80.00th=[ 1582], 90.00th=[ 3359], 95.00th=[ 4686], 00:27:22.083 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7635], 00:27:22.083 | 99.99th=[ 8455] 00:27:22.083 bw ( KiB/s): min=107520, max=118848, per=48.46%, avg=115200.00, stdev=5262.22, samples=4 00:27:22.083 iops : min= 6720, max= 7428, avg=7200.00, stdev=328.89, samples=4 00:27:22.083 write: IOPS=8307, BW=130MiB/s (136MB/s)(234MiB/1801msec); 0 zone resets 00:27:22.083 slat (usec): min=26, max=115, avg=29.53, 
stdev= 6.33 00:27:22.083 clat (usec): min=4005, max=19021, avg=12197.96, stdev=1665.69 00:27:22.083 lat (usec): min=4032, max=19049, avg=12227.49, stdev=1665.28 00:27:22.083 clat percentiles (usec): 00:27:22.083 | 1.00th=[ 7242], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10945], 00:27:22.083 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:27:22.083 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14222], 95.00th=[14746], 00:27:22.083 | 99.00th=[16188], 99.50th=[16581], 99.90th=[18482], 99.95th=[18744], 00:27:22.083 | 99.99th=[19006] 00:27:22.083 bw ( KiB/s): min=111296, max=124576, per=89.38%, avg=118808.00, stdev=6015.40, samples=4 00:27:22.083 iops : min= 6956, max= 7786, avg=7425.50, stdev=375.96, samples=4 00:27:22.083 lat (usec) : 500=0.01%, 750=3.07%, 1000=19.52% 00:27:22.083 lat (msec) : 2=35.35%, 4=2.60%, 10=8.10%, 20=31.36% 00:27:22.083 cpu : usr=96.06%, sys=1.95%, ctx=225, majf=0, minf=1 00:27:22.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:22.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:22.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:22.083 issued rwts: total=29253,14962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:22.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:22.083 00:27:22.083 Run status group 0 (all jobs): 00:27:22.083 READ: bw=232MiB/s (243MB/s), 232MiB/s-232MiB/s (243MB/s-243MB/s), io=457MiB (479MB), run=1969-1969msec 00:27:22.083 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=234MiB (245MB), run=1801-1801msec 00:27:22.083 22:13:33 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.083 22:13:33 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:22.083 22:13:33 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:22.083 22:13:33 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:22.083 22:13:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:22.083 22:13:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:22.083 22:13:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:22.083 22:13:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:22.083 22:13:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:22.340 22:13:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:22.340 22:13:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:27:22.340 22:13:33 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:27:25.621 Nvme0n1 00:27:25.621 22:13:36 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:30.890 22:13:41 -- host/fio.sh@53 -- # ls_guid=b81870f9-7bc2-492a-bca6-d351aea8388c 00:27:30.890 22:13:41 -- host/fio.sh@54 -- # get_lvs_free_mb b81870f9-7bc2-492a-bca6-d351aea8388c 00:27:30.890 22:13:41 -- common/autotest_common.sh@1343 -- # local lvs_uuid=b81870f9-7bc2-492a-bca6-d351aea8388c 00:27:30.890 22:13:41 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:30.890 22:13:41 -- common/autotest_common.sh@1345 -- # local fc 00:27:30.890 22:13:41 -- common/autotest_common.sh@1346 -- # local cs 00:27:30.890 22:13:41 -- 
common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:30.890 22:13:42 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:30.890 { 00:27:30.890 "uuid": "b81870f9-7bc2-492a-bca6-d351aea8388c", 00:27:30.890 "name": "lvs_0", 00:27:30.890 "base_bdev": "Nvme0n1", 00:27:30.890 "total_data_clusters": 1862, 00:27:30.890 "free_clusters": 1862, 00:27:30.890 "block_size": 512, 00:27:30.890 "cluster_size": 1073741824 00:27:30.890 } 00:27:30.890 ]' 00:27:30.890 22:13:42 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="b81870f9-7bc2-492a-bca6-d351aea8388c") .free_clusters' 00:27:30.890 22:13:42 -- common/autotest_common.sh@1348 -- # fc=1862 00:27:30.890 22:13:42 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="b81870f9-7bc2-492a-bca6-d351aea8388c") .cluster_size' 00:27:30.890 22:13:42 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:30.890 22:13:42 -- common/autotest_common.sh@1352 -- # free_mb=1906688 00:27:30.890 22:13:42 -- common/autotest_common.sh@1353 -- # echo 1906688 00:27:30.890 1906688 00:27:30.890 22:13:42 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:27:31.457 1a038782-b20d-4936-8688-c8190e90e5f6 00:27:31.457 22:13:42 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:31.715 22:13:42 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:31.974 22:13:42 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:31.974 22:13:43 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:31.974 22:13:43 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:31.974 22:13:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:31.974 22:13:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.974 22:13:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:31.974 22:13:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.974 22:13:43 -- common/autotest_common.sh@1320 -- # shift 00:27:31.974 22:13:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:31.974 22:13:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.974 22:13:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.974 22:13:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:31.974 22:13:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:31.974 22:13:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:31.974 22:13:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:31.974 22:13:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 
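The free_mb=1906688 figure in the trace above is just the lvstore geometry reported by bdev_lvol_get_lvstores re-expressed in MiB; the arithmetic, as a one-line sketch:

    # 1862 free clusters x 1073741824-byte (1 GiB) clusters, converted to MiB
    echo $(( 1862 * 1073741824 / 1024 / 1024 ))   # -> 1906688

That value is then handed to bdev_lvol_create for lbd_0, so the logical volume consumes the entire lvs_0 store carved out of Nvme0n1.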
00:27:31.974 22:13:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:31.974 22:13:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:31.974 22:13:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:31.974 22:13:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:31.974 22:13:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:31.974 22:13:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:31.974 22:13:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:32.568 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:32.568 fio-3.35 00:27:32.568 Starting 1 thread 00:27:32.568 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.101 00:27:35.101 test: (groupid=0, jobs=1): err= 0: pid=2325976: Fri Jul 26 22:13:45 2024 00:27:35.101 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(78.9MiB/2005msec) 00:27:35.101 slat (nsec): min=1346, max=20694, avg=1464.36, stdev=293.25 00:27:35.101 clat (usec): min=194, max=372158, avg=6323.01, stdev=20766.73 00:27:35.101 lat (usec): min=195, max=372161, avg=6324.47, stdev=20766.77 00:27:35.101 clat percentiles (msec): 00:27:35.101 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:35.101 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:35.101 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:35.101 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 372], 99.95th=[ 372], 00:27:35.101 | 99.99th=[ 372] 00:27:35.101 bw ( KiB/s): min=11896, max=49808, per=99.91%, avg=40248.00, stdev=18901.84, samples=4 00:27:35.101 iops : min= 2974, max=12452, avg=10062.00, stdev=4725.46, samples=4 00:27:35.101 write: IOPS=10.1k, BW=39.3MiB/s (41.3MB/s)(78.9MiB/2005msec); 0 zone resets 00:27:35.101 slat (nsec): min=1392, max=17430, avg=1579.26, stdev=337.52 00:27:35.101 clat (usec): min=159, max=372423, avg=6282.67, stdev=20187.05 00:27:35.101 lat (usec): min=161, max=372426, avg=6284.25, stdev=20187.11 00:27:35.101 clat percentiles (msec): 00:27:35.101 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:27:35.101 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:35.101 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:35.101 | 99.00th=[ 6], 99.50th=[ 6], 99.90th=[ 372], 99.95th=[ 372], 00:27:35.101 | 99.99th=[ 372] 00:27:35.101 bw ( KiB/s): min=12520, max=49544, per=99.96%, avg=40276.00, stdev=18504.00, samples=4 00:27:35.101 iops : min= 3130, max=12386, avg=10069.00, stdev=4626.00, samples=4 00:27:35.101 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:27:35.101 lat (msec) : 2=0.03%, 4=0.30%, 10=99.30%, 500=0.32% 00:27:35.101 cpu : usr=99.40%, sys=0.10%, ctx=16, majf=0, minf=11 00:27:35.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:35.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:35.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:35.101 issued rwts: total=20192,20197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:35.101 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:35.101 00:27:35.101 Run status group 0 (all jobs): 00:27:35.101 READ: bw=39.3MiB/s (41.2MB/s), 
39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=78.9MiB (82.7MB), run=2005-2005msec 00:27:35.101 WRITE: bw=39.3MiB/s (41.3MB/s), 39.3MiB/s-39.3MiB/s (41.3MB/s-41.3MB/s), io=78.9MiB (82.7MB), run=2005-2005msec 00:27:35.101 22:13:45 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:35.101 22:13:46 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:36.479 22:13:47 -- host/fio.sh@64 -- # ls_nested_guid=2ef833b4-52b0-4afb-a13d-81596bbb4451 00:27:36.479 22:13:47 -- host/fio.sh@65 -- # get_lvs_free_mb 2ef833b4-52b0-4afb-a13d-81596bbb4451 00:27:36.479 22:13:47 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2ef833b4-52b0-4afb-a13d-81596bbb4451 00:27:36.479 22:13:47 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:36.479 22:13:47 -- common/autotest_common.sh@1345 -- # local fc 00:27:36.479 22:13:47 -- common/autotest_common.sh@1346 -- # local cs 00:27:36.479 22:13:47 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:36.479 22:13:47 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:36.479 { 00:27:36.479 "uuid": "b81870f9-7bc2-492a-bca6-d351aea8388c", 00:27:36.479 "name": "lvs_0", 00:27:36.479 "base_bdev": "Nvme0n1", 00:27:36.479 "total_data_clusters": 1862, 00:27:36.479 "free_clusters": 0, 00:27:36.479 "block_size": 512, 00:27:36.479 "cluster_size": 1073741824 00:27:36.479 }, 00:27:36.479 { 00:27:36.479 "uuid": "2ef833b4-52b0-4afb-a13d-81596bbb4451", 00:27:36.479 "name": "lvs_n_0", 00:27:36.479 "base_bdev": "1a038782-b20d-4936-8688-c8190e90e5f6", 00:27:36.479 "total_data_clusters": 476206, 00:27:36.479 "free_clusters": 476206, 00:27:36.479 "block_size": 512, 00:27:36.479 "cluster_size": 4194304 00:27:36.479 } 00:27:36.479 ]' 00:27:36.479 22:13:47 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2ef833b4-52b0-4afb-a13d-81596bbb4451") .free_clusters' 00:27:36.479 22:13:47 -- common/autotest_common.sh@1348 -- # fc=476206 00:27:36.479 22:13:47 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2ef833b4-52b0-4afb-a13d-81596bbb4451") .cluster_size' 00:27:36.479 22:13:47 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:36.479 22:13:47 -- common/autotest_common.sh@1352 -- # free_mb=1904824 00:27:36.479 22:13:47 -- common/autotest_common.sh@1353 -- # echo 1904824 00:27:36.479 1904824 00:27:36.479 22:13:47 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:27:37.415 19f08606-e04f-4266-8fc7-4bf218476df1 00:27:37.415 22:13:48 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:37.416 22:13:48 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:37.674 22:13:48 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:37.933 22:13:48 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:37.933 22:13:48 -- 
common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:37.933 22:13:48 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:37.933 22:13:48 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.933 22:13:48 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:37.933 22:13:48 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.933 22:13:48 -- common/autotest_common.sh@1320 -- # shift 00:27:37.933 22:13:48 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:37.933 22:13:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.933 22:13:48 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.933 22:13:48 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:37.933 22:13:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:37.933 22:13:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:37.933 22:13:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:37.933 22:13:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.933 22:13:48 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.933 22:13:48 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:37.933 22:13:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:37.933 22:13:48 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:37.933 22:13:48 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:37.933 22:13:48 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:37.933 22:13:48 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:38.192 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:38.192 fio-3.35 00:27:38.192 Starting 1 thread 00:27:38.192 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.727 00:27:40.727 test: (groupid=0, jobs=1): err= 0: pid=2327088: Fri Jul 26 22:13:51 2024 00:27:40.727 read: IOPS=10.6k, BW=41.4MiB/s (43.5MB/s)(83.1MiB/2005msec) 00:27:40.727 slat (nsec): min=1345, max=18778, avg=1462.32, stdev=320.86 00:27:40.727 clat (usec): min=2590, max=9978, avg=5959.94, stdev=163.32 00:27:40.727 lat (usec): min=2593, max=9980, avg=5961.40, stdev=163.30 00:27:40.727 clat percentiles (usec): 00:27:40.727 | 1.00th=[ 5866], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5932], 00:27:40.727 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5997], 00:27:40.727 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 5997], 95.00th=[ 5997], 00:27:40.727 | 99.00th=[ 6063], 99.50th=[ 6849], 99.90th=[ 8455], 99.95th=[ 9634], 00:27:40.727 | 99.99th=[ 9896] 00:27:40.727 bw ( KiB/s): min=40752, max=43216, per=99.97%, avg=42424.00, stdev=1147.38, samples=4 00:27:40.727 iops : min=10188, max=10804, avg=10606.00, stdev=286.84, samples=4 00:27:40.727 write: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(83.1MiB/2005msec); 0 zone 
resets 00:27:40.727 slat (nsec): min=1388, max=12908, avg=1578.63, stdev=235.35 00:27:40.727 clat (usec): min=2585, max=9985, avg=5980.91, stdev=172.63 00:27:40.727 lat (usec): min=2590, max=9987, avg=5982.49, stdev=172.62 00:27:40.727 clat percentiles (usec): 00:27:40.727 | 1.00th=[ 5866], 5.00th=[ 5932], 10.00th=[ 5932], 20.00th=[ 5932], 00:27:40.727 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 5997], 60.00th=[ 5997], 00:27:40.727 | 70.00th=[ 5997], 80.00th=[ 5997], 90.00th=[ 5997], 95.00th=[ 5997], 00:27:40.727 | 99.00th=[ 6128], 99.50th=[ 6849], 99.90th=[ 8455], 99.95th=[ 9896], 00:27:40.727 | 99.99th=[10028] 00:27:40.727 bw ( KiB/s): min=41224, max=42912, per=99.93%, avg=42392.00, stdev=787.12, samples=4 00:27:40.727 iops : min=10306, max=10728, avg=10598.00, stdev=196.78, samples=4 00:27:40.727 lat (msec) : 4=0.03%, 10=99.97% 00:27:40.727 cpu : usr=99.60%, sys=0.10%, ctx=15, majf=0, minf=11 00:27:40.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:40.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:40.727 issued rwts: total=21271,21264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.727 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:40.727 00:27:40.727 Run status group 0 (all jobs): 00:27:40.727 READ: bw=41.4MiB/s (43.5MB/s), 41.4MiB/s-41.4MiB/s (43.5MB/s-43.5MB/s), io=83.1MiB (87.1MB), run=2005-2005msec 00:27:40.727 WRITE: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=83.1MiB (87.1MB), run=2005-2005msec 00:27:40.727 22:13:51 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:40.727 22:13:51 -- host/fio.sh@74 -- # sync 00:27:40.727 22:13:51 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:48.841 22:13:59 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:48.841 22:13:59 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:54.113 22:14:04 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:54.113 22:14:05 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:57.401 22:14:08 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:57.401 22:14:08 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:57.401 22:14:08 -- host/fio.sh@86 -- # nvmftestfini 00:27:57.401 22:14:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:57.401 22:14:08 -- nvmf/common.sh@116 -- # sync 00:27:57.401 22:14:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:57.401 22:14:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:57.401 22:14:08 -- nvmf/common.sh@119 -- # set +e 00:27:57.401 22:14:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:57.401 22:14:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:57.401 rmmod nvme_rdma 00:27:57.401 rmmod nvme_fabrics 00:27:57.401 22:14:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:57.401 22:14:08 -- nvmf/common.sh@123 -- # set -e 00:27:57.401 22:14:08 -- nvmf/common.sh@124 -- # return 0 00:27:57.401 22:14:08 -- nvmf/common.sh@477 -- # '[' -n 2322458 ']' 00:27:57.401 22:14:08 -- nvmf/common.sh@478 
-- # killprocess 2322458 00:27:57.401 22:14:08 -- common/autotest_common.sh@926 -- # '[' -z 2322458 ']' 00:27:57.401 22:14:08 -- common/autotest_common.sh@930 -- # kill -0 2322458 00:27:57.401 22:14:08 -- common/autotest_common.sh@931 -- # uname 00:27:57.401 22:14:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:57.401 22:14:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2322458 00:27:57.401 22:14:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:57.401 22:14:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:57.401 22:14:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2322458' 00:27:57.401 killing process with pid 2322458 00:27:57.401 22:14:08 -- common/autotest_common.sh@945 -- # kill 2322458 00:27:57.401 22:14:08 -- common/autotest_common.sh@950 -- # wait 2322458 00:27:57.401 22:14:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:57.401 22:14:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:57.401 00:27:57.401 real 0m51.608s 00:27:57.401 user 3m37.018s 00:27:57.401 sys 0m9.046s 00:27:57.401 22:14:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.401 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 ************************************ 00:27:57.401 END TEST nvmf_fio_host 00:27:57.401 ************************************ 00:27:57.401 22:14:08 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:57.401 22:14:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:57.401 22:14:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.401 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:27:57.401 ************************************ 00:27:57.401 START TEST nvmf_failover 00:27:57.401 ************************************ 00:27:57.401 22:14:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:57.661 * Looking for test storage... 
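Every fio job in the nvmf_fio_host test that just finished uses the same invocation pattern: stock fio preloaded with SPDK's external NVMe ioengine and pointed at the RDMA listener through the filename string. A condensed sketch with workspace paths abbreviated (the job-file contents themselves are not shown in the log):

    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096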
00:27:57.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:57.661 22:14:08 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.661 22:14:08 -- nvmf/common.sh@7 -- # uname -s 00:27:57.661 22:14:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.661 22:14:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.661 22:14:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.661 22:14:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.661 22:14:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.661 22:14:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.661 22:14:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.661 22:14:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.661 22:14:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.661 22:14:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.661 22:14:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:57.661 22:14:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:57.661 22:14:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.661 22:14:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.661 22:14:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:57.661 22:14:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:57.661 22:14:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.661 22:14:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.661 22:14:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.661 22:14:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.661 22:14:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.661 22:14:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.661 22:14:08 -- paths/export.sh@5 -- # export PATH 00:27:57.661 22:14:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.661 22:14:08 -- nvmf/common.sh@46 -- # : 0 00:27:57.661 22:14:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:57.661 22:14:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:57.661 22:14:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:57.661 22:14:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.661 22:14:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.661 22:14:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:57.661 22:14:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:57.661 22:14:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:57.661 22:14:08 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:57.661 22:14:08 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:57.661 22:14:08 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:57.661 22:14:08 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:57.661 22:14:08 -- host/failover.sh@18 -- # nvmftestinit 00:27:57.661 22:14:08 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:57.661 22:14:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.661 22:14:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:57.661 22:14:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:57.661 22:14:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:57.661 22:14:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.661 22:14:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.661 22:14:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.661 22:14:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:57.661 22:14:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:57.661 22:14:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:57.661 22:14:08 -- common/autotest_common.sh@10 -- # set +x 00:28:05.819 22:14:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:05.819 22:14:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:05.819 22:14:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:05.819 22:14:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:05.819 22:14:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:05.819 22:14:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:05.819 22:14:16 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:28:05.819 22:14:16 -- nvmf/common.sh@294 -- # net_devs=() 00:28:05.819 22:14:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:05.819 22:14:16 -- nvmf/common.sh@295 -- # e810=() 00:28:05.819 22:14:16 -- nvmf/common.sh@295 -- # local -ga e810 00:28:05.819 22:14:16 -- nvmf/common.sh@296 -- # x722=() 00:28:05.819 22:14:16 -- nvmf/common.sh@296 -- # local -ga x722 00:28:05.819 22:14:16 -- nvmf/common.sh@297 -- # mlx=() 00:28:05.819 22:14:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:05.819 22:14:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.819 22:14:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:05.819 22:14:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:05.819 22:14:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:05.819 22:14:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:05.819 22:14:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:05.819 22:14:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:05.819 22:14:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:05.819 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:05.819 22:14:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:05.819 22:14:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:05.819 22:14:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:05.819 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:05.819 22:14:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:05.819 22:14:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:05.820 22:14:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:05.820 22:14:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:05.820 22:14:16 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.820 22:14:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:05.820 22:14:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.820 22:14:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:05.820 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.820 22:14:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.820 22:14:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:05.820 22:14:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.820 22:14:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:05.820 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.820 22:14:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:05.820 22:14:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:05.820 22:14:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:05.820 22:14:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:05.820 22:14:16 -- nvmf/common.sh@57 -- # uname 00:28:05.820 22:14:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:05.820 22:14:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:05.820 22:14:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:05.820 22:14:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:05.820 22:14:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:05.820 22:14:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:05.820 22:14:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:05.820 22:14:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:05.820 22:14:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:05.820 22:14:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:05.820 22:14:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:05.820 22:14:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:05.820 22:14:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:05.820 22:14:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:05.820 22:14:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:05.820 22:14:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:05.820 22:14:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@104 -- # continue 2 00:28:05.820 22:14:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@104 -- # continue 2 00:28:05.820 22:14:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:05.820 22:14:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:05.820 22:14:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:05.820 22:14:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:05.820 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:05.820 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:05.820 altname enp217s0f0np0 00:28:05.820 altname ens818f0np0 00:28:05.820 inet 192.168.100.8/24 scope global mlx_0_0 00:28:05.820 valid_lft forever preferred_lft forever 00:28:05.820 22:14:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:05.820 22:14:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:05.820 22:14:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:05.820 22:14:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:05.820 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:05.820 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:05.820 altname enp217s0f1np1 00:28:05.820 altname ens818f1np1 00:28:05.820 inet 192.168.100.9/24 scope global mlx_0_1 00:28:05.820 valid_lft forever preferred_lft forever 00:28:05.820 22:14:16 -- nvmf/common.sh@410 -- # return 0 00:28:05.820 22:14:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:05.820 22:14:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:05.820 22:14:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:05.820 22:14:16 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:05.820 22:14:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:05.820 22:14:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:05.820 22:14:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:05.820 22:14:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:05.820 22:14:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:05.820 22:14:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@104 -- # continue 2 00:28:05.820 22:14:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:28:05.820 22:14:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:05.820 22:14:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@104 -- # continue 2 00:28:05.820 22:14:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:05.820 22:14:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:05.820 22:14:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:05.820 22:14:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:05.820 22:14:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:05.820 22:14:16 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:05.820 192.168.100.9' 00:28:05.820 22:14:16 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:05.820 192.168.100.9' 00:28:05.820 22:14:16 -- nvmf/common.sh@445 -- # head -n 1 00:28:05.820 22:14:16 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:05.820 22:14:16 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:05.820 192.168.100.9' 00:28:05.820 22:14:16 -- nvmf/common.sh@446 -- # tail -n +2 00:28:05.820 22:14:16 -- nvmf/common.sh@446 -- # head -n 1 00:28:05.820 22:14:16 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:05.820 22:14:16 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:05.820 22:14:16 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:05.820 22:14:16 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:05.820 22:14:16 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:05.820 22:14:16 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:05.820 22:14:16 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:05.820 22:14:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:05.820 22:14:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:05.820 22:14:16 -- common/autotest_common.sh@10 -- # set +x 00:28:05.820 22:14:16 -- nvmf/common.sh@469 -- # nvmfpid=2334234 00:28:05.820 22:14:16 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:05.820 22:14:16 -- nvmf/common.sh@470 -- # waitforlisten 2334234 00:28:05.820 22:14:16 -- common/autotest_common.sh@819 -- # '[' -z 2334234 ']' 00:28:05.820 22:14:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.820 22:14:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:05.820 22:14:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.820 22:14:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:05.820 22:14:16 -- common/autotest_common.sh@10 -- # set +x 00:28:05.820 [2024-07-26 22:14:16.952776] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:05.821 [2024-07-26 22:14:16.952836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.821 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.821 [2024-07-26 22:14:17.035708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:06.079 [2024-07-26 22:14:17.072778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:06.079 [2024-07-26 22:14:17.072892] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.079 [2024-07-26 22:14:17.072902] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:06.079 [2024-07-26 22:14:17.072910] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.079 [2024-07-26 22:14:17.073018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.079 [2024-07-26 22:14:17.073101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.079 [2024-07-26 22:14:17.073102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.647 22:14:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:06.647 22:14:17 -- common/autotest_common.sh@852 -- # return 0 00:28:06.647 22:14:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:06.647 22:14:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:06.647 22:14:17 -- common/autotest_common.sh@10 -- # set +x 00:28:06.647 22:14:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:06.647 22:14:17 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:06.906 [2024-07-26 22:14:17.974797] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19eccd0/0x19f11c0) succeed. 00:28:06.906 [2024-07-26 22:14:17.984783] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19ee220/0x1a32850) succeed. 
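For readers following the trace, the target bring-up above reduces to a short sequence: find the RDMA-capable mlx5 ports, read their IPv4 addresses, load nvme-rdma, start nvmf_tgt, and create an RDMA transport. The sketch below is a minimal standalone rendering of that flow; the interface names, addresses, binary flags, and RPC options are the ones visible in the log, while SPDK_DIR, the get_ip helper, and the RPC-socket wait loop are illustrative assumptions rather than the test's actual helper functions.

#!/usr/bin/env bash
# Minimal sketch of the target bring-up captured in the log above.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical path; the CI job uses its own workspace
rpc="$SPDK_DIR/scripts/rpc.py"

# Read the first IPv4 address of an interface (same ip/awk/cut pipeline as in the trace).
get_ip() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

NVMF_FIRST_TARGET_IP=$(get_ip mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip mlx_0_1)   # 192.168.100.9 in this run

modprobe nvme-rdma                        # host-side driver, loaded before the test traffic starts

# Start the NVMe-oF target with the same flags as the log, then wait for its RPC socket.
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
until "$rpc" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# RDMA transport with the options shown in the log.
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

echo "target ready on $NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"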
00:28:06.906 22:14:18 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:07.165 Malloc0 00:28:07.165 22:14:18 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:07.424 22:14:18 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:07.424 22:14:18 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:07.683 [2024-07-26 22:14:18.777329] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:07.683 22:14:18 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:07.941 [2024-07-26 22:14:18.949612] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:07.941 22:14:18 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:07.941 [2024-07-26 22:14:19.126252] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:28:07.941 22:14:19 -- host/failover.sh@31 -- # bdevperf_pid=2334563 00:28:07.941 22:14:19 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:07.941 22:14:19 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:07.941 22:14:19 -- host/failover.sh@34 -- # waitforlisten 2334563 /var/tmp/bdevperf.sock 00:28:07.941 22:14:19 -- common/autotest_common.sh@819 -- # '[' -z 2334563 ']' 00:28:07.941 22:14:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:07.941 22:14:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:07.941 22:14:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:07.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
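The RPC sequence behind the subsystem plumbing and the bdevperf launch above is similarly compact; a condensed sketch follows. The bdev size and block size, NQN, serial number, listener ports, and bdevperf flags are taken from the log, while the port loop and the SPDK_DIR/TARGET_IP variables are illustrative.

#!/usr/bin/env bash
# Condensed sketch of the bdev/subsystem/listener setup and bdevperf launch shown above.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical path
rpc="$SPDK_DIR/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
TARGET_IP=192.168.100.8

# 64 MiB Malloc bdev with a 512-byte block size, exported through one subsystem.
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns "$NQN" Malloc0

# Three RDMA listeners on the same address so the initiator can fail over between ports.
for port in 4420 4421 4422; do
    "$rpc" nvmf_subsystem_add_listener "$NQN" -t rdma -a "$TARGET_IP" -s "$port"
done

# bdevperf runs against its own RPC socket; -z makes it wait for an explicit
# perform_tests RPC (issued later via bdevperf.py). Remaining flags are copied
# verbatim from the log.
"$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 15 -f &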
00:28:07.941 22:14:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:07.941 22:14:19 -- common/autotest_common.sh@10 -- # set +x 00:28:08.876 22:14:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:08.876 22:14:19 -- common/autotest_common.sh@852 -- # return 0 00:28:08.876 22:14:19 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:09.134 NVMe0n1 00:28:09.134 22:14:20 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:09.392 00:28:09.392 22:14:20 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:09.392 22:14:20 -- host/failover.sh@39 -- # run_test_pid=2334815 00:28:09.392 22:14:20 -- host/failover.sh@41 -- # sleep 1 00:28:10.326 22:14:21 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:10.584 22:14:21 -- host/failover.sh@45 -- # sleep 3 00:28:13.867 22:14:24 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:13.867 00:28:13.867 22:14:24 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:13.867 22:14:25 -- host/failover.sh@50 -- # sleep 3 00:28:17.151 22:14:28 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:17.151 [2024-07-26 22:14:28.230817] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:17.151 22:14:28 -- host/failover.sh@55 -- # sleep 1 00:28:18.086 22:14:29 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:18.344 22:14:29 -- host/failover.sh@59 -- # wait 2334815 00:28:24.919 0 00:28:24.919 22:14:35 -- host/failover.sh@61 -- # killprocess 2334563 00:28:24.919 22:14:35 -- common/autotest_common.sh@926 -- # '[' -z 2334563 ']' 00:28:24.919 22:14:35 -- common/autotest_common.sh@930 -- # kill -0 2334563 00:28:24.919 22:14:35 -- common/autotest_common.sh@931 -- # uname 00:28:24.919 22:14:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:24.919 22:14:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2334563 00:28:24.919 22:14:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:24.919 22:14:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:24.919 22:14:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2334563' 00:28:24.919 killing process with pid 2334563 00:28:24.919 22:14:35 -- common/autotest_common.sh@945 -- # kill 2334563 00:28:24.919 22:14:35 -- common/autotest_common.sh@950 -- # wait 2334563 00:28:24.919 22:14:35 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:24.919 [2024-07-26 22:14:19.183364] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:24.919 [2024-07-26 22:14:19.183424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2334563 ] 00:28:24.919 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.919 [2024-07-26 22:14:19.268311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.919 [2024-07-26 22:14:19.305089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.919 Running I/O for 15 seconds... 00:28:24.919 [2024-07-26 22:14:22.628848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.628890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.628911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.628921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.628934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.628943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.628955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.628964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.628976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.628985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.628996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629035] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629425] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x180800 00:28:24.920 [2024-07-26 22:14:22.629597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.920 [2024-07-26 22:14:22.629616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183d00 00:28:24.920 [2024-07-26 22:14:22.629640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.920 [2024-07-26 22:14:22.629651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.629660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.629679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.629699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.629719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.629740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.629759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.629779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.629799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629810] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.629820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.629840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.629862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.629882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.629902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.629922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.629941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.629962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.629982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.629992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86512 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.630022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.630067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.630088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.630108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.630128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:86568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.630187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.630227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.630247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.630267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x180800 00:28:24.921 [2024-07-26 22:14:22.630289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.630329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85936 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183d00 00:28:24.921 [2024-07-26 22:14:22.630388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.921 [2024-07-26 22:14:22.630398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.921 [2024-07-26 22:14:22.630408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.630447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.630528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630567] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.630670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.630689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 
lba:86032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.630772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.630872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.630931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.630951] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.630972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.630983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.630994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.631005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.631014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.631025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.631034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.631044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.631053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.631064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.631073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.631084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183d00 00:28:24.922 [2024-07-26 22:14:22.631093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.631104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.631113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.631124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.922 [2024-07-26 22:14:22.631134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.922 [2024-07-26 22:14:22.631145] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013884080 len:0x1000 key:0x180800 00:28:24.922 [2024-07-26 22:14:22.631154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:22.631174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x180800 00:28:24.923 [2024-07-26 22:14:22.631194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x180800 00:28:24.923 [2024-07-26 22:14:22.631214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x180800 00:28:24.923 [2024-07-26 22:14:22.631235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:22.631255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x180800 00:28:24.923 [2024-07-26 22:14:22.631275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x180800 00:28:24.923 [2024-07-26 22:14:22.631296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x180800 00:28:24.923 [2024-07-26 22:14:22.631316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86904 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x180800 00:28:24.923 [2024-07-26 22:14:22.631336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:22.631356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:22.631376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:22.631395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:22.631415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:22.631435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.631446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:22.631455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.633254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:24.923 [2024-07-26 22:14:22.633267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:24.923 [2024-07-26 22:14:22.633276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86200 len:8 PRP1 0x0 PRP2 0x0 00:28:24.923 [2024-07-26 22:14:22.633286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:22.633326] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
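The run of notices above is one repeating pattern: nvme_io_qpair_print_command prints each queued READ/WRITE, and spdk_nvme_print_completion then reports it as ABORTED - SQ DELETION because the submission queue is being torn down ahead of the controller reset. A minimal sketch for tallying those aborts from a saved copy of this console output (the file name nvmf_failover.log is only a hypothetical placeholder, not something produced by this job):

import re
from collections import Counter

LOG = "nvmf_failover.log"  # hypothetical path: this console output saved locally

counts = Counter()
with open(LOG) as f:
    for line in f:
        # each printed I/O command counts once; its matching completion carries
        # the "ABORTED - SQ DELETION" status seen throughout this log
        for op in re.findall(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) ", line):
            counts[op] += 1
        counts["aborted completions"] += len(re.findall(r"ABORTED - SQ DELETION", line))

print(dict(counts))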
00:28:24.923 [2024-07-26 22:14:22.633342] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:24.923 [2024-07-26 22:14:22.633353] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.923 [2024-07-26 22:14:22.635219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.923 [2024-07-26 22:14:22.649540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:24.923 [2024-07-26 22:14:22.679276] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:24.923 [2024-07-26 22:14:26.058196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:26.058237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013898a80 len:0x1000 key:0x181600 00:28:24.923 [2024-07-26 22:14:26.058267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:26.058289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:26.058310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:26.058330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:26.058351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:26.058371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 
key:0x181600 00:28:24.923 [2024-07-26 22:14:26.058399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:46408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:26.058420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x181600 00:28:24.923 [2024-07-26 22:14:26.058441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:26.058462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:26.058483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:47080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:26.058503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x181600 00:28:24.923 [2024-07-26 22:14:26.058524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183d00 00:28:24.923 [2024-07-26 22:14:26.058544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:26.058567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:26.058588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.923 [2024-07-26 22:14:26.058598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.923 [2024-07-26 22:14:26.058608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.924 [2024-07-26 22:14:26.058619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:47120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.924 [2024-07-26 22:14:26.058632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.924 [2024-07-26 22:14:26.058643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.924 [2024-07-26 22:14:26.058654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.924 [2024-07-26 22:14:26.058665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.924 [2024-07-26 22:14:26.058674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.924 [2024-07-26 22:14:26.058686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c6080 len:0x1000 key:0x181600 00:28:24.924 [2024-07-26 22:14:26.058695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.924 [2024-07-26 22:14:26.058706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.924 [2024-07-26 22:14:26.058717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.924 [2024-07-26 22:14:26.058728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183d00 00:28:24.924 [2024-07-26 22:14:26.058738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.924 [2024-07-26 22:14:26.058749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x181600 00:28:24.924 [2024-07-26 22:14:26.058758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.924 [2024-07-26 22:14:26.058769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.924 [2024-07-26 22:14:26.058778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183d00 
00:28:24.925 [2024-07-26 22:14:26.058799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:46472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.058820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.925 [2024-07-26 22:14:26.058840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.925 [2024-07-26 22:14:26.058860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.058880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:46496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.058902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.058923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:46504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.058944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.058964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.058984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.058994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.925 [2024-07-26 22:14:26.059004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.925 [2024-07-26 22:14:26.059024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.925 [2024-07-26 22:14:26.059044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.059064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.059085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.059105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013887200 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.059125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.925 [2024-07-26 22:14:26.059149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:46552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.059169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:46560 len:8 
SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.059189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:47272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.059209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:47280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.925 [2024-07-26 22:14:26.059231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.059252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:46576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.059272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.059292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.059313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:46600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.059333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.059354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 
22:14:26.059376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:47320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013879b80 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.059397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.925 [2024-07-26 22:14:26.059417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013877a80 len:0x1000 key:0x181600 00:28:24.925 [2024-07-26 22:14:26.059441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.925 [2024-07-26 22:14:26.059451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183d00 00:28:24.925 [2024-07-26 22:14:26.059461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.059481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f8900 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:47360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.059646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.059686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.059706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:46696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.059769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.059810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.059873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.059893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:46736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.059913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:46744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.059933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 
len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.059953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.059973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.059984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.059994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.060014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.060034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183d00 00:28:24.926 [2024-07-26 22:14:26.060054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ec300 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.060075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.060096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.060117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.060138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.060158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x181600 00:28:24.926 [2024-07-26 22:14:26.060178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.060198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.926 [2024-07-26 22:14:26.060209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.926 [2024-07-26 22:14:26.060218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:46848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:26.060259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:26.060321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47592 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x2000138dec80 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:26.060484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:26.060564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:46960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:26.060604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:46968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:26.060669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:46976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:26.060730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:26.060749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x181600 00:28:24.927 [2024-07-26 22:14:26.060812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.060842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.927 [2024-07-26 22:14:26.060852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.062553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:24.927 [2024-07-26 22:14:26.062567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:24.927 [2024-07-26 22:14:26.062575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47752 len:8 PRP1 0x0 PRP2 0x0 00:28:24.927 [2024-07-26 22:14:26.062585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:26.062628] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:28:24.927 [2024-07-26 22:14:26.062640] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:28:24.927 [2024-07-26 22:14:26.062650] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:24.927 [2024-07-26 22:14:26.064253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.927 [2024-07-26 22:14:26.078289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:24.927 [2024-07-26 22:14:26.110583] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:24.927 [2024-07-26 22:14:30.425271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183d00 00:28:24.927 [2024-07-26 22:14:30.425313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.927 [2024-07-26 22:14:30.425337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x180800 00:28:24.927 [2024-07-26 22:14:30.425347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.928 [2024-07-26 22:14:30.425433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.928 [2024-07-26 22:14:30.425473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.928 [2024-07-26 22:14:30.425553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ce480 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74168 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.928 [2024-07-26 22:14:30.425720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.928 [2024-07-26 22:14:30.425740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013877a80 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b4800 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.425942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.425962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.928 [2024-07-26 22:14:30.425981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.425994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183d00 00:28:24.928 [2024-07-26 22:14:30.426003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.426014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.426023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.426034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.426043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 
sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.426054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x180800 00:28:24.928 [2024-07-26 22:14:30.426064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.426075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.928 [2024-07-26 22:14:30.426084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.928 [2024-07-26 22:14:30.426094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74920 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000138ad480 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013895900 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d5800 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:75008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183d00 00:28:24.929 [2024-07-26 22:14:30.426634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x180800 00:28:24.929 [2024-07-26 22:14:30.426714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.929 [2024-07-26 22:14:30.426734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.929 [2024-07-26 22:14:30.426745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.426754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.426774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013889300 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.426794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183d00 
00:28:24.930 [2024-07-26 22:14:30.426816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.426837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.426858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.426878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.426898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.426919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f5780 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.426939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.426958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.426978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.426989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.426998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:75136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.427059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183d00 00:28:24.930 
[2024-07-26 22:14:30.427199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.427220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e6000 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.427242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.427282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.427345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.427366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 
22:14:30.427396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.427405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.427426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x180800 00:28:24.930 [2024-07-26 22:14:30.427446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.930 [2024-07-26 22:14:30.427466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183d00 00:28:24.930 [2024-07-26 22:14:30.427485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.930 [2024-07-26 22:14:30.427496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x180800 00:28:24.931 [2024-07-26 22:14:30.427505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183d00 00:28:24.931 [2024-07-26 22:14:30.427525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:24.931 [2024-07-26 22:14:30.427585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x180800 00:28:24.931 [2024-07-26 22:14:30.427628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183d00 00:28:24.931 [2024-07-26 22:14:30.427687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x180800 00:28:24.931 [2024-07-26 22:14:30.427728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183d00 00:28:24.931 [2024-07-26 22:14:30.427748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 
22:14:30.427778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x180800 00:28:24.931 [2024-07-26 22:14:30.427809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x180800 00:28:24.931 [2024-07-26 22:14:30.427829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183d00 00:28:24.931 [2024-07-26 22:14:30.427849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.427879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:24.931 [2024-07-26 22:14:30.427889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f3a77000 sqhd:5310 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.429845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:24.931 [2024-07-26 22:14:30.429860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:24.931 [2024-07-26 22:14:30.429869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74680 len:8 PRP1 0x0 PRP2 0x0 00:28:24.931 [2024-07-26 22:14:30.429879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.931 [2024-07-26 22:14:30.429922] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:28:24.931 [2024-07-26 22:14:30.429933] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:28:24.931 [2024-07-26 22:14:30.429944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:24.931 [2024-07-26 22:14:30.431693] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:24.931 [2024-07-26 22:14:30.445556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:28:24.931 [2024-07-26 22:14:30.483065] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:24.931
00:28:24.931 Latency(us)
00:28:24.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.931 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:24.931 Verification LBA range: start 0x0 length 0x4000
00:28:24.931 NVMe0n1 : 15.00 19980.27 78.05 311.71 0.00 6295.41 335.87 1020054.73
00:28:24.931 ===================================================================================================================
00:28:24.931 Total : 19980.27 78.05 311.71 0.00 6295.41 335.87 1020054.73
00:28:24.931 Received shutdown signal, test time was about 15.000000 seconds
00:28:24.931
00:28:24.931 Latency(us)
00:28:24.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.931 ===================================================================================================================
00:28:24.931 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:24.931 22:14:35 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:24.931 22:14:35 -- host/failover.sh@65 -- # count=3
00:28:24.931 22:14:35 -- host/failover.sh@67 -- # (( count != 3 ))
00:28:24.931 22:14:35 -- host/failover.sh@73 -- # bdevperf_pid=2337505
00:28:24.931 22:14:35 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:24.931 22:14:35 -- host/failover.sh@75 -- # waitforlisten 2337505 /var/tmp/bdevperf.sock
00:28:24.931 22:14:35 -- common/autotest_common.sh@819 -- # '[' -z 2337505 ']'
00:28:24.931 22:14:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:24.931 22:14:35 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:24.931 22:14:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:24.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
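Read together, the host/failover.sh@65-@75 trace above boils down to the following shell sequence. This is an illustrative reconstruction from the trace, not the script itself; $rootdir stands in for /var/jenkins/workspace/nvmf-phy-autotest/spdk, and try.txt is assumed to hold the captured bdevperf output, as the later cat and rm steps suggest.

    # Require exactly three 'Resetting controller successful' events in the first run.
    count=$(grep -c 'Resetting controller successful' $rootdir/test/nvmf/host/try.txt)
    (( count != 3 )) && exit 1

    # Relaunch bdevperf in RPC-driven mode (-z) on a UNIX socket so paths can be
    # added and removed at runtime; the remaining flags are copied from the trace.
    $rootdir/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock   # autotest_common.sh helper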
00:28:24.931 22:14:35 -- common/autotest_common.sh@828 -- # xtrace_disable
00:28:24.931 22:14:35 -- common/autotest_common.sh@10 -- # set +x
00:28:25.530 22:14:36 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:28:25.530 22:14:36 -- common/autotest_common.sh@852 -- # return 0
00:28:25.530 22:14:36 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:28:25.788 [2024-07-26 22:14:36.848802] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:28:25.788 22:14:36 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:28:26.047 [2024-07-26 22:14:37.025396] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:28:26.047 22:14:37 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:26.305 NVMe0n1
00:28:26.305 22:14:37 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:26.305
00:28:26.563 22:14:37 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:26.563
00:28:26.820 22:14:37 -- host/failover.sh@82 -- # grep -q NVMe0
00:28:26.820 22:14:37 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:26.820 22:14:37 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:27.078 22:14:38 -- host/failover.sh@87 -- # sleep 3
00:28:30.360 22:14:41 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:30.360 22:14:41 -- host/failover.sh@88 -- # grep -q NVMe0
00:28:30.360 22:14:41 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:30.360 22:14:41 -- host/failover.sh@90 -- # run_test_pid=2338332
00:28:30.360 22:14:41 -- host/failover.sh@92 -- # wait 2338332
00:28:31.295 0
00:28:31.295 22:14:42 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:31.295 [2024-07-26 22:14:35.899294] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
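Condensed, the RPC sequence just traced adds two more RDMA listeners on the target and then gives the bdevperf-side controller NVMe0 three transport IDs before removing the active one to provoke the failover seen below. The sketch shortens the rpc.py path, folds each traced bdev_nvme_get_controllers/grep pair into a pipe, and condenses the three attach calls into a loop; the arguments themselves are verbatim from the trace.

    # Target side: listen on the two additional ports used as failover destinations.
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422

    # Initiator side (bdevperf RPC socket): attach NVMe0 via 4420, then repeat the
    # attach for 4421 and 4422, registering them as alternate paths for NVMe0.
    for port in 4420 4421 4422; do
        rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t rdma -a 192.168.100.8 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # Drop the active 4420 path, give the failover time to complete, confirm the
    # controller is still present, then kick off the verification I/O run.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests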
00:28:31.295 [2024-07-26 22:14:35.899353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2337505 ] 00:28:31.295 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.295 [2024-07-26 22:14:35.985078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.295 [2024-07-26 22:14:36.017792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.295 [2024-07-26 22:14:38.128166] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:31.295 [2024-07-26 22:14:38.128698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:31.295 [2024-07-26 22:14:38.128726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:31.295 [2024-07-26 22:14:38.143644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:31.295 [2024-07-26 22:14:38.159857] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:31.295 Running I/O for 1 seconds... 00:28:31.295 00:28:31.295 Latency(us) 00:28:31.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.295 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:31.295 Verification LBA range: start 0x0 length 0x4000 00:28:31.295 NVMe0n1 : 1.00 25113.54 98.10 0.00 0.00 5072.55 1232.08 11429.48 00:28:31.295 =================================================================================================================== 00:28:31.295 Total : 25113.54 98.10 0.00 0.00 5072.55 1232.08 11429.48 00:28:31.295 22:14:42 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:31.295 22:14:42 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:31.553 22:14:42 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:31.812 22:14:42 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:31.812 22:14:42 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:31.812 22:14:42 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:32.070 22:14:43 -- host/failover.sh@101 -- # sleep 3 00:28:35.351 22:14:46 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:35.351 22:14:46 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:35.351 22:14:46 -- host/failover.sh@108 -- # killprocess 2337505 00:28:35.351 22:14:46 -- common/autotest_common.sh@926 -- # '[' -z 2337505 ']' 00:28:35.351 22:14:46 -- common/autotest_common.sh@930 -- # kill -0 2337505 00:28:35.351 22:14:46 -- common/autotest_common.sh@931 -- # uname 00:28:35.351 22:14:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:35.351 22:14:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 2337505 00:28:35.351 22:14:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:35.351 22:14:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:35.351 22:14:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2337505' 00:28:35.351 killing process with pid 2337505 00:28:35.351 22:14:46 -- common/autotest_common.sh@945 -- # kill 2337505 00:28:35.351 22:14:46 -- common/autotest_common.sh@950 -- # wait 2337505 00:28:35.351 22:14:46 -- host/failover.sh@110 -- # sync 00:28:35.351 22:14:46 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:35.609 22:14:46 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:35.609 22:14:46 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:35.609 22:14:46 -- host/failover.sh@116 -- # nvmftestfini 00:28:35.609 22:14:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:35.609 22:14:46 -- nvmf/common.sh@116 -- # sync 00:28:35.609 22:14:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:35.609 22:14:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:35.609 22:14:46 -- nvmf/common.sh@119 -- # set +e 00:28:35.609 22:14:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:35.609 22:14:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:35.609 rmmod nvme_rdma 00:28:35.609 rmmod nvme_fabrics 00:28:35.609 22:14:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:35.609 22:14:46 -- nvmf/common.sh@123 -- # set -e 00:28:35.609 22:14:46 -- nvmf/common.sh@124 -- # return 0 00:28:35.609 22:14:46 -- nvmf/common.sh@477 -- # '[' -n 2334234 ']' 00:28:35.609 22:14:46 -- nvmf/common.sh@478 -- # killprocess 2334234 00:28:35.609 22:14:46 -- common/autotest_common.sh@926 -- # '[' -z 2334234 ']' 00:28:35.609 22:14:46 -- common/autotest_common.sh@930 -- # kill -0 2334234 00:28:35.609 22:14:46 -- common/autotest_common.sh@931 -- # uname 00:28:35.610 22:14:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:35.610 22:14:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2334234 00:28:35.868 22:14:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:35.868 22:14:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:35.868 22:14:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2334234' 00:28:35.868 killing process with pid 2334234 00:28:35.868 22:14:46 -- common/autotest_common.sh@945 -- # kill 2334234 00:28:35.868 22:14:46 -- common/autotest_common.sh@950 -- # wait 2334234 00:28:36.127 22:14:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:36.127 22:14:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:36.127 00:28:36.127 real 0m38.534s 00:28:36.127 user 2m3.207s 00:28:36.127 sys 0m8.545s 00:28:36.127 22:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.127 22:14:47 -- common/autotest_common.sh@10 -- # set +x 00:28:36.127 ************************************ 00:28:36.127 END TEST nvmf_failover 00:28:36.127 ************************************ 00:28:36.127 22:14:47 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:36.127 22:14:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:36.127 22:14:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:36.127 22:14:47 -- common/autotest_common.sh@10 -- # set +x 
00:28:36.127 ************************************ 00:28:36.127 START TEST nvmf_discovery 00:28:36.127 ************************************ 00:28:36.127 22:14:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:36.127 * Looking for test storage... 00:28:36.127 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:36.127 22:14:47 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.127 22:14:47 -- nvmf/common.sh@7 -- # uname -s 00:28:36.127 22:14:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.127 22:14:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.127 22:14:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.127 22:14:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.127 22:14:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.127 22:14:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.127 22:14:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.127 22:14:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.127 22:14:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.127 22:14:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.127 22:14:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:36.127 22:14:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:36.127 22:14:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.127 22:14:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.127 22:14:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.127 22:14:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:36.127 22:14:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.127 22:14:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.127 22:14:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.127 22:14:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.127 22:14:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.127 22:14:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.127 22:14:47 -- paths/export.sh@5 -- # export PATH 00:28:36.128 22:14:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.128 22:14:47 -- nvmf/common.sh@46 -- # : 0 00:28:36.128 22:14:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:36.128 22:14:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:36.128 22:14:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:36.128 22:14:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.128 22:14:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.128 22:14:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:36.128 22:14:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:36.128 22:14:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:36.128 22:14:47 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:28:36.128 22:14:47 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:36.128 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:36.128 22:14:47 -- host/discovery.sh@13 -- # exit 0 00:28:36.128 00:28:36.128 real 0m0.122s 00:28:36.128 user 0m0.045s 00:28:36.128 sys 0m0.087s 00:28:36.128 22:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.128 22:14:47 -- common/autotest_common.sh@10 -- # set +x 00:28:36.128 ************************************ 00:28:36.128 END TEST nvmf_discovery 00:28:36.128 ************************************ 00:28:36.128 22:14:47 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:36.128 22:14:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:36.128 22:14:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:36.128 22:14:47 -- common/autotest_common.sh@10 -- # set +x 00:28:36.387 ************************************ 00:28:36.387 START TEST nvmf_discovery_remove_ifc 00:28:36.387 ************************************ 00:28:36.387 22:14:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:36.387 * Looking for test storage... 
00:28:36.387 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:36.387 22:14:47 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.387 22:14:47 -- nvmf/common.sh@7 -- # uname -s 00:28:36.387 22:14:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.387 22:14:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.387 22:14:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.387 22:14:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.387 22:14:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.387 22:14:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.387 22:14:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.387 22:14:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.387 22:14:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.387 22:14:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.387 22:14:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:36.387 22:14:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:36.387 22:14:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.387 22:14:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.387 22:14:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.387 22:14:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:36.387 22:14:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.387 22:14:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.387 22:14:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.387 22:14:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.388 22:14:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.388 22:14:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.388 22:14:47 -- paths/export.sh@5 -- # export PATH 00:28:36.388 22:14:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.388 22:14:47 -- nvmf/common.sh@46 -- # : 0 00:28:36.388 22:14:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:36.388 22:14:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:36.388 22:14:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:36.388 22:14:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.388 22:14:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.388 22:14:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:36.388 22:14:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:36.388 22:14:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:36.388 22:14:47 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:36.388 22:14:47 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:36.388 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
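[editor's sketch] Both discovery suites short-circuit on RDMA with the same three-line guard visible in the xtrace above; the literal rdma on the left of the comparison is presumably the expanded --transport value. A minimal sketch, assuming the variable is named TEST_TRANSPORT as elsewhere in the suite:

    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi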
00:28:36.388 22:14:47 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:36.388 00:28:36.388 real 0m0.132s 00:28:36.388 user 0m0.055s 00:28:36.388 sys 0m0.087s 00:28:36.388 22:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:36.388 22:14:47 -- common/autotest_common.sh@10 -- # set +x 00:28:36.388 ************************************ 00:28:36.388 END TEST nvmf_discovery_remove_ifc 00:28:36.388 ************************************ 00:28:36.388 22:14:47 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:28:36.388 22:14:47 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:36.388 22:14:47 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:36.388 22:14:47 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:36.388 22:14:47 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:36.388 22:14:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:36.388 22:14:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:36.388 22:14:47 -- common/autotest_common.sh@10 -- # set +x 00:28:36.388 ************************************ 00:28:36.388 START TEST nvmf_bdevperf 00:28:36.388 ************************************ 00:28:36.388 22:14:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:36.646 * Looking for test storage... 00:28:36.646 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:36.646 22:14:47 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:36.646 22:14:47 -- nvmf/common.sh@7 -- # uname -s 00:28:36.646 22:14:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.646 22:14:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.646 22:14:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.646 22:14:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.646 22:14:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.646 22:14:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.646 22:14:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.646 22:14:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.646 22:14:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.646 22:14:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.646 22:14:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:36.646 22:14:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:36.646 22:14:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.646 22:14:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.646 22:14:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.646 22:14:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:36.646 22:14:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.646 22:14:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.646 22:14:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.646 22:14:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.646 22:14:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.646 22:14:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.646 22:14:47 -- paths/export.sh@5 -- # export PATH 00:28:36.646 22:14:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.646 22:14:47 -- nvmf/common.sh@46 -- # : 0 00:28:36.646 22:14:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:36.646 22:14:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:36.646 22:14:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:36.646 22:14:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.646 22:14:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.646 22:14:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:36.647 22:14:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:36.647 22:14:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:36.647 22:14:47 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:36.647 22:14:47 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:36.647 22:14:47 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:36.647 22:14:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:36.647 22:14:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.647 22:14:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:36.647 22:14:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:36.647 22:14:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:36.647 22:14:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:28:36.647 22:14:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:36.647 22:14:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.647 22:14:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:36.647 22:14:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:36.647 22:14:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:36.647 22:14:47 -- common/autotest_common.sh@10 -- # set +x 00:28:44.755 22:14:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:44.755 22:14:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:44.755 22:14:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:44.755 22:14:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:44.755 22:14:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:44.755 22:14:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:44.755 22:14:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:44.755 22:14:54 -- nvmf/common.sh@294 -- # net_devs=() 00:28:44.755 22:14:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:44.755 22:14:54 -- nvmf/common.sh@295 -- # e810=() 00:28:44.755 22:14:54 -- nvmf/common.sh@295 -- # local -ga e810 00:28:44.755 22:14:54 -- nvmf/common.sh@296 -- # x722=() 00:28:44.755 22:14:54 -- nvmf/common.sh@296 -- # local -ga x722 00:28:44.755 22:14:54 -- nvmf/common.sh@297 -- # mlx=() 00:28:44.755 22:14:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:44.755 22:14:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:44.755 22:14:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:44.755 22:14:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:44.755 22:14:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:44.755 22:14:54 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:44.755 22:14:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:44.755 22:14:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:44.755 22:14:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:44.755 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:44.755 22:14:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 
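[editor's sketch] The NIC probe in nvmf/common.sh works purely from sysfs plus ip(8): each supported PCI ID is matched, its net devices are read out of /sys, and the first IPv4 address is peeled off with awk and cut, as the trace below shows for mlx_0_0 and mlx_0_1. A condensed sketch of that lookup for a single function (PCI address and interface name taken from this log; the real helpers loop over every matched device):

    pci=0000:d9:00.0
    # net device(s) registered under this PCI function, e.g. mlx_0_0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    ifname=${pci_net_devs[0]##*/}
    # first IPv4 address on that interface - 192.168.100.8 in this run
    addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
    echo "Found net devices under $pci: $ifname ($addr)"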
00:28:44.755 22:14:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:44.755 22:14:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:44.755 22:14:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:44.755 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:44.755 22:14:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:44.755 22:14:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:44.755 22:14:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:44.755 22:14:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.755 22:14:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:44.755 22:14:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.755 22:14:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:44.755 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:44.755 22:14:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.755 22:14:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:44.755 22:14:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:44.755 22:14:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:44.755 22:14:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:44.755 22:14:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:44.755 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:44.755 22:14:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:44.755 22:14:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:44.755 22:14:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:44.755 22:14:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:44.755 22:14:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:44.755 22:14:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:44.755 22:14:54 -- nvmf/common.sh@57 -- # uname 00:28:44.755 22:14:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:44.755 22:14:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:44.755 22:14:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:44.755 22:14:54 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:44.755 22:14:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:44.755 22:14:54 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:44.755 22:14:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:44.755 22:14:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:44.755 22:14:55 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:44.755 22:14:55 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:44.755 22:14:55 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:44.755 22:14:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:44.755 22:14:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:44.755 22:14:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:44.756 22:14:55 -- nvmf/common.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:44.756 22:14:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:44.756 22:14:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:44.756 22:14:55 -- nvmf/common.sh@104 -- # continue 2 00:28:44.756 22:14:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:44.756 22:14:55 -- nvmf/common.sh@104 -- # continue 2 00:28:44.756 22:14:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:44.756 22:14:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:44.756 22:14:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:44.756 22:14:55 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:44.756 22:14:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:44.756 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:44.756 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:44.756 altname enp217s0f0np0 00:28:44.756 altname ens818f0np0 00:28:44.756 inet 192.168.100.8/24 scope global mlx_0_0 00:28:44.756 valid_lft forever preferred_lft forever 00:28:44.756 22:14:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:44.756 22:14:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:44.756 22:14:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:44.756 22:14:55 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:44.756 22:14:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:44.756 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:44.756 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:44.756 altname enp217s0f1np1 00:28:44.756 altname ens818f1np1 00:28:44.756 inet 192.168.100.9/24 scope global mlx_0_1 00:28:44.756 valid_lft forever preferred_lft forever 00:28:44.756 22:14:55 -- nvmf/common.sh@410 -- # return 0 00:28:44.756 22:14:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:44.756 22:14:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:44.756 22:14:55 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:44.756 22:14:55 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:44.756 22:14:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:44.756 22:14:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:44.756 22:14:55 -- 
nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:44.756 22:14:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:44.756 22:14:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:44.756 22:14:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:44.756 22:14:55 -- nvmf/common.sh@104 -- # continue 2 00:28:44.756 22:14:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:44.756 22:14:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:44.756 22:14:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:44.756 22:14:55 -- nvmf/common.sh@104 -- # continue 2 00:28:44.756 22:14:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:44.756 22:14:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:44.756 22:14:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:44.756 22:14:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:44.756 22:14:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:44.756 22:14:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:44.756 22:14:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:44.756 22:14:55 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:44.756 192.168.100.9' 00:28:44.756 22:14:55 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:44.756 192.168.100.9' 00:28:44.756 22:14:55 -- nvmf/common.sh@445 -- # head -n 1 00:28:44.756 22:14:55 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:44.756 22:14:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:44.756 192.168.100.9' 00:28:44.756 22:14:55 -- nvmf/common.sh@446 -- # tail -n +2 00:28:44.756 22:14:55 -- nvmf/common.sh@446 -- # head -n 1 00:28:44.756 22:14:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:44.756 22:14:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:44.756 22:14:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:44.756 22:14:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:44.756 22:14:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:44.756 22:14:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:44.756 22:14:55 -- host/bdevperf.sh@25 -- # tgt_init 00:28:44.756 22:14:55 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:44.756 22:14:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:44.756 22:14:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:44.756 22:14:55 -- common/autotest_common.sh@10 -- # set +x 00:28:44.756 22:14:55 -- nvmf/common.sh@469 -- # nvmfpid=2343424 00:28:44.756 22:14:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 
0 -e 0xFFFF -m 0xE 00:28:44.756 22:14:55 -- nvmf/common.sh@470 -- # waitforlisten 2343424 00:28:44.756 22:14:55 -- common/autotest_common.sh@819 -- # '[' -z 2343424 ']' 00:28:44.756 22:14:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:44.756 22:14:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:44.756 22:14:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:44.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:44.756 22:14:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:44.756 22:14:55 -- common/autotest_common.sh@10 -- # set +x 00:28:44.756 [2024-07-26 22:14:55.248733] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:44.756 [2024-07-26 22:14:55.248779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:44.756 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.756 [2024-07-26 22:14:55.334582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:44.756 [2024-07-26 22:14:55.371653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:44.756 [2024-07-26 22:14:55.371767] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:44.756 [2024-07-26 22:14:55.371777] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:44.756 [2024-07-26 22:14:55.371786] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:44.756 [2024-07-26 22:14:55.371838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:44.756 [2024-07-26 22:14:55.371908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:44.756 [2024-07-26 22:14:55.371911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.014 22:14:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:45.014 22:14:56 -- common/autotest_common.sh@852 -- # return 0 00:28:45.014 22:14:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:45.014 22:14:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:45.014 22:14:56 -- common/autotest_common.sh@10 -- # set +x 00:28:45.014 22:14:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.014 22:14:56 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:45.014 22:14:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.014 22:14:56 -- common/autotest_common.sh@10 -- # set +x 00:28:45.014 [2024-07-26 22:14:56.113095] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb1fcd0/0xb241c0) succeed. 00:28:45.014 [2024-07-26 22:14:56.123452] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb21220/0xb65850) succeed. 
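[editor's sketch] With nvmf_tgt running and both mlx5 devices claimed, the target side of the bdevperf test is assembled from a handful of RPCs: the nvmf_create_transport call traced just above (which is what produced the two create_ib_device notices) followed by the bdev and subsystem RPCs traced below. rpc_cmd is the test helper that forwards to scripts/rpc.py on the default application socket; 64 and 512 are MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from bdevperf.sh:

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420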
00:28:45.014 22:14:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.014 22:14:56 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:45.014 22:14:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.014 22:14:56 -- common/autotest_common.sh@10 -- # set +x 00:28:45.272 Malloc0 00:28:45.272 22:14:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.272 22:14:56 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:45.272 22:14:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.272 22:14:56 -- common/autotest_common.sh@10 -- # set +x 00:28:45.272 22:14:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.272 22:14:56 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:45.272 22:14:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.272 22:14:56 -- common/autotest_common.sh@10 -- # set +x 00:28:45.272 22:14:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.272 22:14:56 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:45.272 22:14:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.272 22:14:56 -- common/autotest_common.sh@10 -- # set +x 00:28:45.272 [2024-07-26 22:14:56.273365] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:45.272 22:14:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.272 22:14:56 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:45.272 22:14:56 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:45.272 22:14:56 -- nvmf/common.sh@520 -- # config=() 00:28:45.272 22:14:56 -- nvmf/common.sh@520 -- # local subsystem config 00:28:45.272 22:14:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:45.272 22:14:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:45.272 { 00:28:45.273 "params": { 00:28:45.273 "name": "Nvme$subsystem", 00:28:45.273 "trtype": "$TEST_TRANSPORT", 00:28:45.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.273 "adrfam": "ipv4", 00:28:45.273 "trsvcid": "$NVMF_PORT", 00:28:45.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.273 "hdgst": ${hdgst:-false}, 00:28:45.273 "ddgst": ${ddgst:-false} 00:28:45.273 }, 00:28:45.273 "method": "bdev_nvme_attach_controller" 00:28:45.273 } 00:28:45.273 EOF 00:28:45.273 )") 00:28:45.273 22:14:56 -- nvmf/common.sh@542 -- # cat 00:28:45.273 22:14:56 -- nvmf/common.sh@544 -- # jq . 00:28:45.273 22:14:56 -- nvmf/common.sh@545 -- # IFS=, 00:28:45.273 22:14:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:45.273 "params": { 00:28:45.273 "name": "Nvme1", 00:28:45.273 "trtype": "rdma", 00:28:45.273 "traddr": "192.168.100.8", 00:28:45.273 "adrfam": "ipv4", 00:28:45.273 "trsvcid": "4420", 00:28:45.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.273 "hdgst": false, 00:28:45.273 "ddgst": false 00:28:45.273 }, 00:28:45.273 "method": "bdev_nvme_attach_controller" 00:28:45.273 }' 00:28:45.273 [2024-07-26 22:14:56.325332] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:45.273 [2024-07-26 22:14:56.325382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343552 ] 00:28:45.273 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.273 [2024-07-26 22:14:56.409905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.273 [2024-07-26 22:14:56.446433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.531 Running I/O for 1 seconds... 00:28:46.465 00:28:46.465 Latency(us) 00:28:46.465 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.465 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:46.465 Verification LBA range: start 0x0 length 0x4000 00:28:46.465 Nvme1n1 : 1.00 25210.98 98.48 0.00 0.00 5054.19 871.63 12215.91 00:28:46.465 =================================================================================================================== 00:28:46.465 Total : 25210.98 98.48 0.00 0.00 5054.19 871.63 12215.91 00:28:46.724 22:14:57 -- host/bdevperf.sh@30 -- # bdevperfpid=2343788 00:28:46.724 22:14:57 -- host/bdevperf.sh@32 -- # sleep 3 00:28:46.724 22:14:57 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:46.724 22:14:57 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:46.724 22:14:57 -- nvmf/common.sh@520 -- # config=() 00:28:46.724 22:14:57 -- nvmf/common.sh@520 -- # local subsystem config 00:28:46.724 22:14:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:46.724 22:14:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:46.724 { 00:28:46.724 "params": { 00:28:46.724 "name": "Nvme$subsystem", 00:28:46.724 "trtype": "$TEST_TRANSPORT", 00:28:46.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.724 "adrfam": "ipv4", 00:28:46.724 "trsvcid": "$NVMF_PORT", 00:28:46.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.724 "hdgst": ${hdgst:-false}, 00:28:46.724 "ddgst": ${ddgst:-false} 00:28:46.724 }, 00:28:46.724 "method": "bdev_nvme_attach_controller" 00:28:46.724 } 00:28:46.724 EOF 00:28:46.724 )") 00:28:46.724 22:14:57 -- nvmf/common.sh@542 -- # cat 00:28:46.724 22:14:57 -- nvmf/common.sh@544 -- # jq . 00:28:46.724 22:14:57 -- nvmf/common.sh@545 -- # IFS=, 00:28:46.724 22:14:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:46.724 "params": { 00:28:46.724 "name": "Nvme1", 00:28:46.724 "trtype": "rdma", 00:28:46.724 "traddr": "192.168.100.8", 00:28:46.724 "adrfam": "ipv4", 00:28:46.724 "trsvcid": "4420", 00:28:46.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:46.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:46.724 "hdgst": false, 00:28:46.724 "ddgst": false 00:28:46.724 }, 00:28:46.724 "method": "bdev_nvme_attach_controller" 00:28:46.724 }' 00:28:46.724 [2024-07-26 22:14:57.863188] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:46.724 [2024-07-26 22:14:57.863240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2343788 ] 00:28:46.724 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.724 [2024-07-26 22:14:57.946664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.982 [2024-07-26 22:14:57.982572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.982 Running I/O for 15 seconds... 00:28:50.305 22:15:00 -- host/bdevperf.sh@33 -- # kill -9 2343424 00:28:50.305 22:15:00 -- host/bdevperf.sh@35 -- # sleep 3 00:28:50.873 [2024-07-26 22:15:01.853015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 
22:15:01.853215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ce480 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 
22:15:01.853408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15488 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.873 [2024-07-26 22:15:01.853877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183d00 00:28:50.873 [2024-07-26 22:15:01.853896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.873 [2024-07-26 22:15:01.853907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x181600 00:28:50.873 [2024-07-26 22:15:01.853916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.853926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.853935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.853946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.853955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.853965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.853974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.853985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16312 len:8 SGL 
KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.853993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16416 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854560] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388c480 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 
22:15:01.854758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.854915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.854933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 
len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.854965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.854990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.855097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.855116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.855135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.855154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.855173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.874 [2024-07-26 22:15:01.855271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f0500 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.855291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ef480 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.855311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183d00 00:28:50.874 [2024-07-26 22:15:01.855331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.874 [2024-07-26 22:15:01.855342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x181600 00:28:50.874 [2024-07-26 22:15:01.855351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183d00 00:28:50.875 [2024-07-26 22:15:01.855369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183d00 00:28:50.875 [2024-07-26 22:15:01.855389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183d00 00:28:50.875 [2024-07-26 22:15:01.855409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x181600 00:28:50.875 [2024-07-26 22:15:01.855427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.875 [2024-07-26 22:15:01.855451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x181600 00:28:50.875 [2024-07-26 22:15:01.855470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183d00 00:28:50.875 [2024-07-26 22:15:01.855489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x181600 00:28:50.875 [2024-07-26 22:15:01.855509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16656 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:50.875 [2024-07-26 22:15:01.855528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183d00 00:28:50.875 [2024-07-26 22:15:01.855547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e1e00 len:0x1000 key:0x181600 00:28:50.875 [2024-07-26 22:15:01.855566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x181600 00:28:50.875 [2024-07-26 22:15:01.855585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183d00 00:28:50.875 [2024-07-26 22:15:01.855604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183d00 00:28:50.875 [2024-07-26 22:15:01.855629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.855639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:50.875 [2024-07-26 22:15:01.855648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:5d959000 sqhd:5310 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.857655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:50.875 [2024-07-26 22:15:01.857671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:50.875 [2024-07-26 22:15:01.857680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16688 len:8 PRP1 0x0 PRP2 0x0 00:28:50.875 [2024-07-26 22:15:01.857691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:50.875 [2024-07-26 22:15:01.857732] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
00:28:50.875 [2024-07-26 22:15:01.859718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.875 [2024-07-26 22:15:01.873877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:50.875 [2024-07-26 22:15:01.876678] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:50.875 [2024-07-26 22:15:01.876696] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:50.875 [2024-07-26 22:15:01.876711] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:51.808 [2024-07-26 22:15:02.880595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:51.808 [2024-07-26 22:15:02.880617] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:51.808 [2024-07-26 22:15:02.880723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:51.808 [2024-07-26 22:15:02.880734] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:51.808 [2024-07-26 22:15:02.880744] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:51.808 [2024-07-26 22:15:02.881753] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:51.808 [2024-07-26 22:15:02.882454] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
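The flood of "ABORTED - SQ DELETION" completions above is the host side of a deliberate target kill: bdevperf.sh sent SIGKILL to nvmf_tgt (pid 2343424) while bdevperf still had up to 128 commands in flight, so bdev_nvme manually completes every queued I/O as aborted, frees the disconnected RDMA qpair, and starts resetting the controller; each reconnect attempt is then rejected (RDMA_CM_EVENT_REJECTED, connect error -74) because nothing is listening on the target address yet. The driving flow, pieced together from the xtrace markers in this log (test/nvmf/host/bdevperf.sh lines 33, 35, 36, 38), looks roughly like the sketch below; the variable names and the bdevperf path are assumptions, not the script verbatim.

    # Sketch of the failover flow in test/nvmf/host/bdevperf.sh (reconstructed, not verbatim)
    "$rootdir/build/examples/bdevperf" -q 128 -o 4096 -w verify -t 15 &   # host I/O, pid 2343788 here
    bdevperf_pid=$!
    kill -9 "$nvmfpid"     # line 33: SIGKILL nvmf_tgt (pid 2343424) while I/O is in flight
    sleep 3                # line 35: host logs the aborts and failed reconnects seen above
    tgt_init               # line 36: start a fresh target and re-create the subsystem (next lines)
    wait "$bdevperf_pid"   # line 38: bdevperf reconnects and finishes its 15-second run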
00:28:51.808 [2024-07-26 22:15:02.893762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:51.808 [2024-07-26 22:15:02.895953] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:51.808 [2024-07-26 22:15:02.895974] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:51.808 [2024-07-26 22:15:02.895982] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:52.742 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2343424 Killed "${NVMF_APP[@]}" "$@" 00:28:52.742 22:15:03 -- host/bdevperf.sh@36 -- # tgt_init 00:28:52.742 22:15:03 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:52.742 22:15:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:52.742 22:15:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:52.742 22:15:03 -- common/autotest_common.sh@10 -- # set +x 00:28:52.742 22:15:03 -- nvmf/common.sh@469 -- # nvmfpid=2344967 00:28:52.742 22:15:03 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:52.742 22:15:03 -- nvmf/common.sh@470 -- # waitforlisten 2344967 00:28:52.742 22:15:03 -- common/autotest_common.sh@819 -- # '[' -z 2344967 ']' 00:28:52.742 22:15:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:52.742 22:15:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:52.742 22:15:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:52.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:52.742 22:15:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:52.742 22:15:03 -- common/autotest_common.sh@10 -- # set +x 00:28:52.742 [2024-07-26 22:15:03.882037] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:52.742 [2024-07-26 22:15:03.882092] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.742 [2024-07-26 22:15:03.899832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:52.742 [2024-07-26 22:15:03.899855] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:52.742 [2024-07-26 22:15:03.899958] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:52.742 [2024-07-26 22:15:03.899969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:52.742 [2024-07-26 22:15:03.899979] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:52.742 [2024-07-26 22:15:03.901475] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:52.742 [2024-07-26 22:15:03.901664] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
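Once the script reaches line 35 the shell reports the killed target ("2343424 Killed"), and tgt_init restarts it: nvmfappstart launches a new nvmf_tgt (pid 2344967) with -i 0 -e 0xFFFF -m 0xE, i.e. shared-memory instance 0, all tracepoint groups enabled, and core mask 0xE (cores 1-3, matching the three reactors reported below), while waitforlisten blocks until the app is reachable on /var/tmp/spdk.sock. A standalone approximation of that start-and-wait step, as a sketch rather than the framework's actual implementation:

    # Launch the target on cores 1-3 and poll the JSON-RPC socket until it answers.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done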
00:28:52.742 [2024-07-26 22:15:03.913388] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.742 [2024-07-26 22:15:03.915400] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:52.742 [2024-07-26 22:15:03.915420] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:52.742 [2024-07-26 22:15:03.915428] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:52.742 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.001 [2024-07-26 22:15:03.970347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:53.001 [2024-07-26 22:15:04.008600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:53.001 [2024-07-26 22:15:04.008707] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.001 [2024-07-26 22:15:04.008717] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.001 [2024-07-26 22:15:04.008726] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:53.001 [2024-07-26 22:15:04.008767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.001 [2024-07-26 22:15:04.008868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:53.001 [2024-07-26 22:15:04.008870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.568 22:15:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:53.568 22:15:04 -- common/autotest_common.sh@852 -- # return 0 00:28:53.568 22:15:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:53.568 22:15:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:53.568 22:15:04 -- common/autotest_common.sh@10 -- # set +x 00:28:53.568 22:15:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:53.568 22:15:04 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:53.568 22:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.568 22:15:04 -- common/autotest_common.sh@10 -- # set +x 00:28:53.568 [2024-07-26 22:15:04.759745] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1788cd0/0x178d1c0) succeed. 00:28:53.568 [2024-07-26 22:15:04.769901] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x178a220/0x17ce850) succeed. 
00:28:53.827 22:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.827 22:15:04 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:53.827 22:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.827 22:15:04 -- common/autotest_common.sh@10 -- # set +x 00:28:53.827 Malloc0 00:28:53.827 22:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.827 22:15:04 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:53.827 22:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.827 22:15:04 -- common/autotest_common.sh@10 -- # set +x 00:28:53.827 22:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.827 22:15:04 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:53.827 22:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.827 22:15:04 -- common/autotest_common.sh@10 -- # set +x 00:28:53.827 22:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.827 22:15:04 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:53.827 22:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.827 22:15:04 -- common/autotest_common.sh@10 -- # set +x 00:28:53.827 [2024-07-26 22:15:04.919370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:53.827 [2024-07-26 22:15:04.919402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:53.827 [2024-07-26 22:15:04.919505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:53.827 [2024-07-26 22:15:04.919515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:53.827 [2024-07-26 22:15:04.919525] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:53.827 [2024-07-26 22:15:04.919956] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:53.827 [2024-07-26 22:15:04.921263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:53.827 [2024-07-26 22:15:04.923190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:53.827 22:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.827 22:15:04 -- host/bdevperf.sh@38 -- # wait 2343788 00:28:53.827 [2024-07-26 22:15:04.951541] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
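The rpc_cmd calls above rebuild everything the killed target had: an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev, and an RDMA listener on 192.168.100.8:4420, after which the host's pending reset finally goes through ("Resetting controller successful"). rpc_cmd forwards its arguments to the target's JSON-RPC interface, so the same bring-up issued by hand would be roughly:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420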
00:29:03.807 00:29:03.807 Latency(us) 00:29:03.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.807 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:03.807 Verification LBA range: start 0x0 length 0x4000 00:29:03.807 Nvme1n1 : 15.00 18499.94 72.27 16111.40 0.00 3686.54 517.73 1033476.51 00:29:03.807 =================================================================================================================== 00:29:03.807 Total : 18499.94 72.27 16111.40 0.00 3686.54 517.73 1033476.51 00:29:03.807 22:15:13 -- host/bdevperf.sh@39 -- # sync 00:29:03.807 22:15:13 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:03.807 22:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.807 22:15:13 -- common/autotest_common.sh@10 -- # set +x 00:29:03.807 22:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.807 22:15:13 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:03.807 22:15:13 -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:03.807 22:15:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:03.807 22:15:13 -- nvmf/common.sh@116 -- # sync 00:29:03.807 22:15:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:03.807 22:15:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:03.807 22:15:13 -- nvmf/common.sh@119 -- # set +e 00:29:03.807 22:15:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:03.807 22:15:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:03.808 rmmod nvme_rdma 00:29:03.808 rmmod nvme_fabrics 00:29:03.808 22:15:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:03.808 22:15:13 -- nvmf/common.sh@123 -- # set -e 00:29:03.808 22:15:13 -- nvmf/common.sh@124 -- # return 0 00:29:03.808 22:15:13 -- nvmf/common.sh@477 -- # '[' -n 2344967 ']' 00:29:03.808 22:15:13 -- nvmf/common.sh@478 -- # killprocess 2344967 00:29:03.808 22:15:13 -- common/autotest_common.sh@926 -- # '[' -z 2344967 ']' 00:29:03.808 22:15:13 -- common/autotest_common.sh@930 -- # kill -0 2344967 00:29:03.808 22:15:13 -- common/autotest_common.sh@931 -- # uname 00:29:03.808 22:15:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:03.808 22:15:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2344967 00:29:03.808 22:15:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:03.808 22:15:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:03.808 22:15:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2344967' 00:29:03.808 killing process with pid 2344967 00:29:03.808 22:15:13 -- common/autotest_common.sh@945 -- # kill 2344967 00:29:03.808 22:15:13 -- common/autotest_common.sh@950 -- # wait 2344967 00:29:03.808 22:15:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:03.808 22:15:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:03.808 00:29:03.808 real 0m26.220s 00:29:03.808 user 1m4.197s 00:29:03.808 sys 0m6.920s 00:29:03.808 22:15:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:03.808 22:15:13 -- common/autotest_common.sh@10 -- # set +x 00:29:03.808 ************************************ 00:29:03.808 END TEST nvmf_bdevperf 00:29:03.808 ************************************ 00:29:03.808 22:15:13 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:29:03.808 22:15:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:03.808 
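After the verify run completes (core mask 0x1, queue depth 128, 4096-byte I/O, 15-second runtime against Nvme1n1), the test tears down what nvmf_bdevperf set up before nvmf_target_disconnect begins: sync, delete the subsystem over RPC, unload the host-side nvme-rdma and nvme-fabrics modules, and stop the nvmf_tgt process (pid 2344967). Done by hand that cleanup amounts to roughly the following sketch; the framework's nvmftestfini also handles traps and shared-memory bookkeeping not shown here.

    sync
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma       # emits the "rmmod nvme_rdma" line seen above
    modprobe -v -r nvme-fabrics    # emits "rmmod nvme_fabrics"
    kill "$nvmfpid"                # 2344967 in this run, stopped via killprocess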
22:15:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:03.808 22:15:13 -- common/autotest_common.sh@10 -- # set +x 00:29:03.808 ************************************ 00:29:03.808 START TEST nvmf_target_disconnect 00:29:03.808 ************************************ 00:29:03.808 22:15:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:29:03.808 * Looking for test storage... 00:29:03.808 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:03.808 22:15:13 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.808 22:15:13 -- nvmf/common.sh@7 -- # uname -s 00:29:03.808 22:15:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.808 22:15:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.808 22:15:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.808 22:15:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.808 22:15:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.808 22:15:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.808 22:15:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.808 22:15:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.808 22:15:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.808 22:15:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.808 22:15:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:03.808 22:15:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:03.808 22:15:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.808 22:15:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.808 22:15:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:03.808 22:15:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:03.808 22:15:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.808 22:15:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.808 22:15:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.808 22:15:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.808 22:15:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.808 22:15:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.808 22:15:13 -- paths/export.sh@5 -- # export PATH 00:29:03.808 22:15:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.808 22:15:13 -- nvmf/common.sh@46 -- # : 0 00:29:03.808 22:15:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:03.808 22:15:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:03.808 22:15:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:03.808 22:15:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.808 22:15:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.808 22:15:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:03.808 22:15:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:03.808 22:15:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:03.808 22:15:13 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:29:03.808 22:15:13 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:03.808 22:15:13 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:03.808 22:15:13 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:03.808 22:15:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:03.808 22:15:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.808 22:15:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:03.808 22:15:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:03.808 22:15:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:03.808 22:15:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:03.808 22:15:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:03.808 22:15:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.808 22:15:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:03.808 22:15:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:03.808 22:15:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:03.808 22:15:13 -- common/autotest_common.sh@10 -- # set +x 00:29:11.927 22:15:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:11.927 22:15:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:11.927 22:15:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:11.927 22:15:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:11.927 22:15:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:11.927 22:15:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:11.927 22:15:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:11.927 
22:15:21 -- nvmf/common.sh@294 -- # net_devs=() 00:29:11.927 22:15:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:11.927 22:15:21 -- nvmf/common.sh@295 -- # e810=() 00:29:11.927 22:15:21 -- nvmf/common.sh@295 -- # local -ga e810 00:29:11.927 22:15:21 -- nvmf/common.sh@296 -- # x722=() 00:29:11.927 22:15:21 -- nvmf/common.sh@296 -- # local -ga x722 00:29:11.927 22:15:21 -- nvmf/common.sh@297 -- # mlx=() 00:29:11.927 22:15:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:11.927 22:15:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.927 22:15:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:11.927 22:15:21 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:11.927 22:15:21 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:11.927 22:15:21 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:11.927 22:15:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:11.927 22:15:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:11.927 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:11.927 22:15:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:11.927 22:15:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:11.927 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:11.927 22:15:21 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:11.927 22:15:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:11.927 22:15:21 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
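The device scan above keys purely off PCI IDs: 0x15b3 is the Mellanox vendor ID and 0x1015 a ConnectX-4 Lx device ID, and a port is accepted once a netdev shows up under its PCI function. A minimal stand-alone sketch of the same check, assuming only the standard sysfs layout (no SPDK helpers):

    # List net devices that sit on Mellanox (0x15b3) PCI functions,
    # mirroring what gather_supported_nvmf_pci_devs does in nvmf/common.sh.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor" 2>/dev/null)" = "0x15b3" ] || continue
        for netdir in "$pci"/net/*; do
            [ -e "$netdir" ] || continue
            printf '%s -> %s (device id %s)\n' \
                "$(basename "$pci")" "$(basename "$netdir")" "$(cat "$pci/device")"
        done
    done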
00:29:11.927 22:15:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.927 22:15:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:11.927 22:15:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.927 22:15:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:11.927 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:11.927 22:15:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.927 22:15:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.927 22:15:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:11.927 22:15:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.927 22:15:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:11.927 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:11.927 22:15:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.927 22:15:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:11.927 22:15:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:11.927 22:15:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:11.927 22:15:21 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:11.927 22:15:21 -- nvmf/common.sh@57 -- # uname 00:29:11.927 22:15:21 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:11.927 22:15:21 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:11.927 22:15:21 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:11.927 22:15:21 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:11.927 22:15:21 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:11.927 22:15:21 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:11.927 22:15:21 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:11.927 22:15:21 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:11.927 22:15:21 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:11.927 22:15:21 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:11.927 22:15:21 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:11.927 22:15:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:11.927 22:15:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:11.927 22:15:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:11.927 22:15:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:11.927 22:15:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:11.927 22:15:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:11.927 22:15:21 -- nvmf/common.sh@104 -- # continue 2 00:29:11.927 22:15:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:11.927 22:15:21 -- 
nvmf/common.sh@103 -- # echo mlx_0_1 00:29:11.927 22:15:21 -- nvmf/common.sh@104 -- # continue 2 00:29:11.927 22:15:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:11.927 22:15:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:11.927 22:15:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:11.927 22:15:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:11.927 22:15:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:11.927 22:15:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:11.927 22:15:21 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:11.927 22:15:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:11.927 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:11.927 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:11.927 altname enp217s0f0np0 00:29:11.927 altname ens818f0np0 00:29:11.927 inet 192.168.100.8/24 scope global mlx_0_0 00:29:11.927 valid_lft forever preferred_lft forever 00:29:11.927 22:15:21 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:11.927 22:15:21 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:11.927 22:15:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:11.927 22:15:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:11.927 22:15:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:11.927 22:15:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:11.927 22:15:21 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:11.927 22:15:21 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:11.927 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:11.927 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:11.927 altname enp217s0f1np1 00:29:11.927 altname ens818f1np1 00:29:11.927 inet 192.168.100.9/24 scope global mlx_0_1 00:29:11.927 valid_lft forever preferred_lft forever 00:29:11.927 22:15:21 -- nvmf/common.sh@410 -- # return 0 00:29:11.927 22:15:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:11.927 22:15:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:11.927 22:15:21 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:11.927 22:15:21 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:11.927 22:15:21 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:11.927 22:15:21 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:11.927 22:15:21 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:11.927 22:15:21 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:11.927 22:15:21 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:11.927 22:15:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:11.927 22:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:11.927 22:15:21 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:11.927 22:15:21 -- nvmf/common.sh@104 -- # continue 2 00:29:11.927 22:15:21 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:11.928 22:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:11.928 22:15:21 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:11.928 22:15:21 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:11.928 22:15:21 
-- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:11.928 22:15:21 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:11.928 22:15:21 -- nvmf/common.sh@104 -- # continue 2 00:29:11.928 22:15:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:11.928 22:15:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:11.928 22:15:21 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:11.928 22:15:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:11.928 22:15:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:11.928 22:15:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:11.928 22:15:21 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:11.928 22:15:21 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:11.928 22:15:21 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:11.928 22:15:21 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:11.928 22:15:21 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:11.928 22:15:21 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:11.928 22:15:21 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:11.928 192.168.100.9' 00:29:11.928 22:15:21 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:11.928 192.168.100.9' 00:29:11.928 22:15:21 -- nvmf/common.sh@445 -- # head -n 1 00:29:11.928 22:15:22 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:11.928 22:15:22 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:11.928 192.168.100.9' 00:29:11.928 22:15:22 -- nvmf/common.sh@446 -- # tail -n +2 00:29:11.928 22:15:22 -- nvmf/common.sh@446 -- # head -n 1 00:29:11.928 22:15:22 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:11.928 22:15:22 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:11.928 22:15:22 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:11.928 22:15:22 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:11.928 22:15:22 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:11.928 22:15:22 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:11.928 22:15:22 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:11.928 22:15:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:11.928 22:15:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:11.928 22:15:22 -- common/autotest_common.sh@10 -- # set +x 00:29:11.928 ************************************ 00:29:11.928 START TEST nvmf_target_disconnect_tc1 00:29:11.928 ************************************ 00:29:11.928 22:15:22 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:29:11.928 22:15:22 -- host/target_disconnect.sh@32 -- # set +e 00:29:11.928 22:15:22 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:11.928 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.928 [2024-07-26 22:15:22.184512] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:11.928 [2024-07-26 22:15:22.184648] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:11.928 [2024-07-26 22:15:22.184679] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:29:12.186 [2024-07-26 22:15:23.188741] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:12.186 [2024-07-26 22:15:23.188801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:29:12.186 [2024-07-26 22:15:23.188836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:29:12.186 [2024-07-26 22:15:23.188892] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:12.186 [2024-07-26 22:15:23.188921] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:12.186 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:29:12.186 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:12.186 Initializing NVMe Controllers 00:29:12.186 22:15:23 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:12.186 22:15:23 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:12.186 22:15:23 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:29:12.186 22:15:23 -- common/autotest_common.sh@1132 -- # return 0 00:29:12.186 22:15:23 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:12.186 22:15:23 -- host/target_disconnect.sh@41 -- # set -e 00:29:12.186 00:29:12.186 real 0m1.146s 00:29:12.186 user 0m0.875s 00:29:12.186 sys 0m0.260s 00:29:12.186 22:15:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.186 22:15:23 -- common/autotest_common.sh@10 -- # set +x 00:29:12.186 ************************************ 00:29:12.186 END TEST nvmf_target_disconnect_tc1 00:29:12.186 ************************************ 00:29:12.186 22:15:23 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:12.186 22:15:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:12.186 22:15:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:12.186 22:15:23 -- common/autotest_common.sh@10 -- # set +x 00:29:12.186 ************************************ 00:29:12.186 START TEST nvmf_target_disconnect_tc2 00:29:12.186 ************************************ 00:29:12.186 22:15:23 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:29:12.186 22:15:23 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:29:12.186 22:15:23 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:12.186 22:15:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:12.186 22:15:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:12.186 22:15:23 -- common/autotest_common.sh@10 -- # set +x 00:29:12.186 22:15:23 -- nvmf/common.sh@469 -- # nvmfpid=2351201 00:29:12.186 22:15:23 -- nvmf/common.sh@470 -- # waitforlisten 2351201 00:29:12.186 22:15:23 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:12.186 22:15:23 -- common/autotest_common.sh@819 -- # '[' -z 2351201 ']' 00:29:12.186 22:15:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.186 22:15:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:12.186 22:15:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
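tc1 above passes precisely because the probe fails: no subsystem or listener exists yet, so the CONNECT is rejected at the RDMA CM level (RDMA_CM_EVENT_REJECTED) and spdk_nvme_probe() returns an error, which the '[' 1 '!=' 1 ']' check at target_disconnect.sh@37 then confirms as the one expected failure. The invocation, copied from the trace (the path is this workspace's build tree; adjust to your own):

    # Expected to fail while nothing is listening on 192.168.100.8:4420.
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
    echo "reconnect exit status: $?"   # non-zero here is the outcome tc1 asserts on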
00:29:12.186 22:15:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:12.186 22:15:23 -- common/autotest_common.sh@10 -- # set +x 00:29:12.186 [2024-07-26 22:15:23.303962] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:12.186 [2024-07-26 22:15:23.304011] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.186 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.186 [2024-07-26 22:15:23.389097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.444 [2024-07-26 22:15:23.427820] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:12.444 [2024-07-26 22:15:23.427925] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.444 [2024-07-26 22:15:23.427935] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.444 [2024-07-26 22:15:23.427944] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.444 [2024-07-26 22:15:23.428063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:12.444 [2024-07-26 22:15:23.428172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:12.444 [2024-07-26 22:15:23.428206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:12.444 [2024-07-26 22:15:23.428207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:13.009 22:15:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:13.009 22:15:24 -- common/autotest_common.sh@852 -- # return 0 00:29:13.009 22:15:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:13.009 22:15:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:13.009 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:29:13.009 22:15:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.009 22:15:24 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:13.009 22:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.009 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:29:13.009 Malloc0 00:29:13.009 22:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.009 22:15:24 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:13.009 22:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.009 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:29:13.009 [2024-07-26 22:15:24.208125] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15187d0/0x1524b40) succeed. 00:29:13.009 [2024-07-26 22:15:24.219151] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1519dc0/0x15c4c40) succeed. 
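The rpc_cmd calls here and just below are plain SPDK JSON-RPC methods; the same malloc-backed RDMA target can be stood up by hand with scripts/rpc.py once nvmf_tgt is running. A condensed sketch, assuming the default /var/tmp/spdk.sock RPC socket:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420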
00:29:13.267 22:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.267 22:15:24 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.267 22:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.267 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:29:13.267 22:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.267 22:15:24 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:13.268 22:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.268 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:29:13.268 22:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.268 22:15:24 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:13.268 22:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.268 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:29:13.268 [2024-07-26 22:15:24.362930] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:13.268 22:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.268 22:15:24 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:13.268 22:15:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:13.268 22:15:24 -- common/autotest_common.sh@10 -- # set +x 00:29:13.268 22:15:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.268 22:15:24 -- host/target_disconnect.sh@50 -- # reconnectpid=2351489 00:29:13.268 22:15:24 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:13.268 22:15:24 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:13.268 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.166 22:15:26 -- host/target_disconnect.sh@53 -- # kill -9 2351201 00:29:15.166 22:15:26 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error 
(sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Write completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 Read completed with error (sct=0, sc=8) 00:29:16.568 starting I/O failed 00:29:16.568 [2024-07-26 22:15:27.572972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:17.501 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2351201 Killed "${NVMF_APP[@]}" "$@" 00:29:17.501 22:15:28 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:29:17.501 22:15:28 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:17.501 22:15:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:17.501 22:15:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:17.501 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:29:17.501 22:15:28 -- nvmf/common.sh@469 -- # nvmfpid=2352078 00:29:17.501 22:15:28 -- nvmf/common.sh@470 -- # waitforlisten 2352078 00:29:17.501 22:15:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:17.501 22:15:28 -- common/autotest_common.sh@819 -- # '[' -z 2352078 ']' 00:29:17.502 22:15:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.502 22:15:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:17.502 22:15:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.502 22:15:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:17.502 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:29:17.502 [2024-07-26 22:15:28.442581] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:17.502 [2024-07-26 22:15:28.442639] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.502 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.502 [2024-07-26 22:15:28.544889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Read completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 Write completed with error (sct=0, sc=8) 00:29:17.502 starting I/O failed 00:29:17.502 [2024-07-26 22:15:28.578181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:17.502 [2024-07-26 22:15:28.582804] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:17.502 [2024-07-26 22:15:28.582906] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.502 [2024-07-26 22:15:28.582917] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.502 [2024-07-26 22:15:28.582925] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.502 [2024-07-26 22:15:28.583040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:17.502 [2024-07-26 22:15:28.583151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:17.502 [2024-07-26 22:15:28.583260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:17.502 [2024-07-26 22:15:28.583261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:18.067 22:15:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:18.067 22:15:29 -- common/autotest_common.sh@852 -- # return 0 00:29:18.067 22:15:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:18.067 22:15:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:18.067 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:18.067 22:15:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.067 22:15:29 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:18.067 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.067 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:18.325 Malloc0 00:29:18.325 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.325 22:15:29 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:18.325 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.325 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:18.325 [2024-07-26 22:15:29.336944] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cde7d0/0x1ceab40) succeed. 00:29:18.325 [2024-07-26 22:15:29.347300] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cdfdc0/0x1d8ac40) succeed. 
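Pieced together from the xtrace, tc2's choreography is: bring up a target, start the reconnect example against it, hard-kill the target mid-I/O, bring up a fresh target on the same address, and let the example's exit status decide the verdict. A rough sketch of that flow (disconnect_init is the helper at target_disconnect.sh@17; the variable names are the script's):

    disconnect_init 192.168.100.8        # nvmf_tgt + Malloc0 + rdma listener (the rpc_cmd calls above)
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"                   # drop the target while I/O is in flight
    sleep 2
    disconnect_init 192.168.100.8        # restart the target on the same address
    wait "$reconnectpid"                 # reconnect has to ride out the disruption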
00:29:18.325 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.325 22:15:29 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.325 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.325 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:18.325 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.325 22:15:29 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:18.325 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.325 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:18.325 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.325 22:15:29 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:18.325 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.325 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:18.325 [2024-07-26 22:15:29.489630] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:18.325 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.325 22:15:29 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:18.325 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:18.325 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:29:18.325 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:18.325 22:15:29 -- host/target_disconnect.sh@58 -- # wait 2351489 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Read completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Read completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Read completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Read completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Read completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.584 starting I/O failed 00:29:18.584 Write completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Write completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Write completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Write completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed 
with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Write completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Write completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Write completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Write completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 Read completed with error (sct=0, sc=8) 00:29:18.585 starting I/O failed 00:29:18.585 [2024-07-26 22:15:29.583353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 [2024-07-26 22:15:29.593746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.593796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.593819] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.593829] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.593845] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.603921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 00:29:18.585 [2024-07-26 22:15:29.613737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.613779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.613801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.613810] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.613820] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.624047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 
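From here on the pattern repeats: each attempt to re-establish an I/O qpair sends a Fabrics CONNECT carrying controller ID 0x1, the restarted target has no such controller ("Unknown controller ID 0x1"), and it answers with sct 1 / sc 130 (0x82, the Fabrics "connect invalid parameters" status), so the qpair is torn down and retried. A quick way to gauge how long that phase lasted is to count the rejections in the captured console output (build.log is only a placeholder for wherever this trace was saved):

    grep -c 'Unknown controller ID 0x1' build.log
    grep -c 'qpair failed and we were unable to recover it' build.log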
00:29:18.585 [2024-07-26 22:15:29.633806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.633846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.633863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.633873] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.633882] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.644140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 00:29:18.585 [2024-07-26 22:15:29.653773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.653817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.653835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.653845] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.653854] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.664008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 00:29:18.585 [2024-07-26 22:15:29.673821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.673861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.673877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.673887] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.673897] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.684083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 
00:29:18.585 [2024-07-26 22:15:29.693922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.693967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.693986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.693996] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.694005] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.704073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 00:29:18.585 [2024-07-26 22:15:29.713984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.714026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.714045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.714054] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.714064] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.724141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 00:29:18.585 [2024-07-26 22:15:29.734006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.734053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.734071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.734080] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.734089] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.744466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 
00:29:18.585 [2024-07-26 22:15:29.754087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.754131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.754148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.754158] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.754166] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.764429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 00:29:18.585 [2024-07-26 22:15:29.774171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.774210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.774226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.774236] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.774245] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.784426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 00:29:18.585 [2024-07-26 22:15:29.794219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.585 [2024-07-26 22:15:29.794261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.585 [2024-07-26 22:15:29.794277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.585 [2024-07-26 22:15:29.794287] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.585 [2024-07-26 22:15:29.794296] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.585 [2024-07-26 22:15:29.804438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.585 qpair failed and we were unable to recover it. 
00:29:18.845 [2024-07-26 22:15:29.814302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.814343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.814360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.814370] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.814382] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.824538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 00:29:18.845 [2024-07-26 22:15:29.834251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.834294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.834312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.834322] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.834331] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.844564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 00:29:18.845 [2024-07-26 22:15:29.854436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.854481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.854497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.854507] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.854516] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.864636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 
00:29:18.845 [2024-07-26 22:15:29.874440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.874477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.874495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.874505] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.874513] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.884924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 00:29:18.845 [2024-07-26 22:15:29.894408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.894450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.894467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.894476] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.894485] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.904724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 00:29:18.845 [2024-07-26 22:15:29.914559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.914606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.914629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.914639] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.914648] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.924836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 
00:29:18.845 [2024-07-26 22:15:29.934665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.934702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.934720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.934730] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.934739] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.944786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 00:29:18.845 [2024-07-26 22:15:29.954722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.954765] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.954781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.954791] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.954800] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.964870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 00:29:18.845 [2024-07-26 22:15:29.974719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.974762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.974780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.974791] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.974800] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:29.985030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 
00:29:18.845 [2024-07-26 22:15:29.994853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.845 [2024-07-26 22:15:29.994889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.845 [2024-07-26 22:15:29.994909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.845 [2024-07-26 22:15:29.994919] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.845 [2024-07-26 22:15:29.994928] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.845 [2024-07-26 22:15:30.005264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.845 qpair failed and we were unable to recover it. 00:29:18.845 [2024-07-26 22:15:30.014805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.846 [2024-07-26 22:15:30.014851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.846 [2024-07-26 22:15:30.014869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.846 [2024-07-26 22:15:30.014879] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.846 [2024-07-26 22:15:30.014889] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.846 [2024-07-26 22:15:30.025279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.846 qpair failed and we were unable to recover it. 00:29:18.846 [2024-07-26 22:15:30.035171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.846 [2024-07-26 22:15:30.035274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.846 [2024-07-26 22:15:30.035297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.846 [2024-07-26 22:15:30.035307] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.846 [2024-07-26 22:15:30.035317] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.846 [2024-07-26 22:15:30.045155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.846 qpair failed and we were unable to recover it. 
00:29:18.846 [2024-07-26 22:15:30.055068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.846 [2024-07-26 22:15:30.055112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.846 [2024-07-26 22:15:30.055129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.846 [2024-07-26 22:15:30.055139] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.846 [2024-07-26 22:15:30.055148] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.846 [2024-07-26 22:15:30.065401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.846 qpair failed and we were unable to recover it. 00:29:19.115 [2024-07-26 22:15:30.075059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.075101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.075118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.075127] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.075137] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.085416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 00:29:19.115 [2024-07-26 22:15:30.095061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.095102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.095119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.095130] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.095139] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.105499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 
00:29:19.115 [2024-07-26 22:15:30.115132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.115175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.115192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.115202] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.115211] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.125653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 00:29:19.115 [2024-07-26 22:15:30.135307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.135350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.135367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.135376] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.135385] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.145513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 00:29:19.115 [2024-07-26 22:15:30.155269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.155314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.155330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.155340] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.155349] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.165488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 
00:29:19.115 [2024-07-26 22:15:30.175305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.175340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.175360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.175369] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.175378] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.185637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 00:29:19.115 [2024-07-26 22:15:30.195391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.195433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.195450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.195459] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.195468] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.205903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 00:29:19.115 [2024-07-26 22:15:30.215433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.215471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.215489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.215499] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.215508] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.225760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 
00:29:19.115 [2024-07-26 22:15:30.235488] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.115 [2024-07-26 22:15:30.235534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.115 [2024-07-26 22:15:30.235551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.115 [2024-07-26 22:15:30.235561] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.115 [2024-07-26 22:15:30.235570] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.115 [2024-07-26 22:15:30.245976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.115 qpair failed and we were unable to recover it. 00:29:19.115 [2024-07-26 22:15:30.255575] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.116 [2024-07-26 22:15:30.255616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.116 [2024-07-26 22:15:30.255645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.116 [2024-07-26 22:15:30.255655] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.116 [2024-07-26 22:15:30.255667] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.116 [2024-07-26 22:15:30.266050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.116 qpair failed and we were unable to recover it. 00:29:19.116 [2024-07-26 22:15:30.275695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.116 [2024-07-26 22:15:30.275736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.116 [2024-07-26 22:15:30.275753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.116 [2024-07-26 22:15:30.275762] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.116 [2024-07-26 22:15:30.275771] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.116 [2024-07-26 22:15:30.286109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.116 qpair failed and we were unable to recover it. 
00:29:19.116 [2024-07-26 22:15:30.295686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.116 [2024-07-26 22:15:30.295726] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.116 [2024-07-26 22:15:30.295742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.116 [2024-07-26 22:15:30.295752] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.116 [2024-07-26 22:15:30.295761] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.116 [2024-07-26 22:15:30.305978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.116 qpair failed and we were unable to recover it. 00:29:19.116 [2024-07-26 22:15:30.315766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.116 [2024-07-26 22:15:30.315811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.116 [2024-07-26 22:15:30.315830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.116 [2024-07-26 22:15:30.315840] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.116 [2024-07-26 22:15:30.315849] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.116 [2024-07-26 22:15:30.325988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.116 qpair failed and we were unable to recover it. 00:29:19.116 [2024-07-26 22:15:30.335824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.116 [2024-07-26 22:15:30.335866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.116 [2024-07-26 22:15:30.335883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.116 [2024-07-26 22:15:30.335893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.116 [2024-07-26 22:15:30.335901] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.375 [2024-07-26 22:15:30.346213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.375 qpair failed and we were unable to recover it. 
00:29:19.375 [2024-07-26 22:15:30.355978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.375 [2024-07-26 22:15:30.356021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.375 [2024-07-26 22:15:30.356038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.375 [2024-07-26 22:15:30.356048] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.375 [2024-07-26 22:15:30.356057] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.375 [2024-07-26 22:15:30.366384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.375 qpair failed and we were unable to recover it. 00:29:19.375 [2024-07-26 22:15:30.375976] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.375 [2024-07-26 22:15:30.376016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.375 [2024-07-26 22:15:30.376033] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.375 [2024-07-26 22:15:30.376043] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.375 [2024-07-26 22:15:30.376052] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.375 [2024-07-26 22:15:30.386319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.375 qpair failed and we were unable to recover it. 00:29:19.375 [2024-07-26 22:15:30.396084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.375 [2024-07-26 22:15:30.396129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.375 [2024-07-26 22:15:30.396146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.375 [2024-07-26 22:15:30.396156] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.375 [2024-07-26 22:15:30.396165] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.375 [2024-07-26 22:15:30.406515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.375 qpair failed and we were unable to recover it. 
00:29:19.375 [2024-07-26 22:15:30.416043] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.375 [2024-07-26 22:15:30.416083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.375 [2024-07-26 22:15:30.416101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.375 [2024-07-26 22:15:30.416111] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.375 [2024-07-26 22:15:30.416120] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.375 [2024-07-26 22:15:30.426651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.375 qpair failed and we were unable to recover it. 00:29:19.375 [2024-07-26 22:15:30.436107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.375 [2024-07-26 22:15:30.436151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.375 [2024-07-26 22:15:30.436168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.375 [2024-07-26 22:15:30.436181] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.375 [2024-07-26 22:15:30.436190] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.376 [2024-07-26 22:15:30.446636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.376 qpair failed and we were unable to recover it. 00:29:19.376 [2024-07-26 22:15:30.456141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.376 [2024-07-26 22:15:30.456185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.376 [2024-07-26 22:15:30.456202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.376 [2024-07-26 22:15:30.456212] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.376 [2024-07-26 22:15:30.456221] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.376 [2024-07-26 22:15:30.466589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.376 qpair failed and we were unable to recover it. 
00:29:19.376 [2024-07-26 22:15:30.476211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.376 [2024-07-26 22:15:30.476257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.376 [2024-07-26 22:15:30.476275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.376 [2024-07-26 22:15:30.476284] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.376 [2024-07-26 22:15:30.476293] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.376 [2024-07-26 22:15:30.486516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.376 qpair failed and we were unable to recover it. 00:29:19.376 [2024-07-26 22:15:30.496307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.376 [2024-07-26 22:15:30.496350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.376 [2024-07-26 22:15:30.496367] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.376 [2024-07-26 22:15:30.496376] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.376 [2024-07-26 22:15:30.496386] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.376 [2024-07-26 22:15:30.506642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.376 qpair failed and we were unable to recover it. 00:29:19.376 [2024-07-26 22:15:30.516334] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.376 [2024-07-26 22:15:30.516375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.376 [2024-07-26 22:15:30.516392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.376 [2024-07-26 22:15:30.516402] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.376 [2024-07-26 22:15:30.516411] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.376 [2024-07-26 22:15:30.526865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.376 qpair failed and we were unable to recover it. 
00:29:19.376 [2024-07-26 22:15:30.536417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.376 [2024-07-26 22:15:30.536459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.376 [2024-07-26 22:15:30.536476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.376 [2024-07-26 22:15:30.536485] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.376 [2024-07-26 22:15:30.536494] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.376 [2024-07-26 22:15:30.546961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.376 qpair failed and we were unable to recover it. 00:29:19.376 [2024-07-26 22:15:30.556406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.376 [2024-07-26 22:15:30.556451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.376 [2024-07-26 22:15:30.556467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.376 [2024-07-26 22:15:30.556477] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.376 [2024-07-26 22:15:30.556486] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.376 [2024-07-26 22:15:30.566949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.376 qpair failed and we were unable to recover it. 00:29:19.376 [2024-07-26 22:15:30.576553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.376 [2024-07-26 22:15:30.576597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.376 [2024-07-26 22:15:30.576613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.376 [2024-07-26 22:15:30.576623] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.376 [2024-07-26 22:15:30.576637] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.376 [2024-07-26 22:15:30.586867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.376 qpair failed and we were unable to recover it. 
00:29:19.376 [2024-07-26 22:15:30.596587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.376 [2024-07-26 22:15:30.596622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.376 [2024-07-26 22:15:30.596643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.376 [2024-07-26 22:15:30.596653] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.376 [2024-07-26 22:15:30.596661] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.635 [2024-07-26 22:15:30.607097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.635 qpair failed and we were unable to recover it. 00:29:19.635 [2024-07-26 22:15:30.616631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.635 [2024-07-26 22:15:30.616671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.635 [2024-07-26 22:15:30.616692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.635 [2024-07-26 22:15:30.616702] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.635 [2024-07-26 22:15:30.616711] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.635 [2024-07-26 22:15:30.627162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.635 qpair failed and we were unable to recover it. 00:29:19.635 [2024-07-26 22:15:30.636751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.635 [2024-07-26 22:15:30.636792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.635 [2024-07-26 22:15:30.636809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.635 [2024-07-26 22:15:30.636818] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.635 [2024-07-26 22:15:30.636827] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.635 [2024-07-26 22:15:30.647129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.635 qpair failed and we were unable to recover it. 
00:29:19.635 [2024-07-26 22:15:30.656719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.635 [2024-07-26 22:15:30.656758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.635 [2024-07-26 22:15:30.656775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.635 [2024-07-26 22:15:30.656785] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.635 [2024-07-26 22:15:30.656794] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.635 [2024-07-26 22:15:30.667124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.635 qpair failed and we were unable to recover it. 00:29:19.635 [2024-07-26 22:15:30.676800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.635 [2024-07-26 22:15:30.676842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.635 [2024-07-26 22:15:30.676858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.635 [2024-07-26 22:15:30.676868] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.635 [2024-07-26 22:15:30.676878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.635 [2024-07-26 22:15:30.687246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.635 qpair failed and we were unable to recover it. 00:29:19.636 [2024-07-26 22:15:30.696926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.696969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.696985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.696994] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.697007] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.636 [2024-07-26 22:15:30.707484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.636 qpair failed and we were unable to recover it. 
00:29:19.636 [2024-07-26 22:15:30.716943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.716988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.717005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.717015] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.717024] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.636 [2024-07-26 22:15:30.727342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.636 qpair failed and we were unable to recover it. 00:29:19.636 [2024-07-26 22:15:30.737013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.737055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.737071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.737081] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.737090] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.636 [2024-07-26 22:15:30.747518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.636 qpair failed and we were unable to recover it. 00:29:19.636 [2024-07-26 22:15:30.757039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.757078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.757094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.757104] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.757113] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.636 [2024-07-26 22:15:30.767421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.636 qpair failed and we were unable to recover it. 
00:29:19.636 [2024-07-26 22:15:30.777191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.777231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.777248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.777257] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.777266] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.636 [2024-07-26 22:15:30.787523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.636 qpair failed and we were unable to recover it. 00:29:19.636 [2024-07-26 22:15:30.797237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.797288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.797305] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.797315] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.797324] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.636 [2024-07-26 22:15:30.807613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.636 qpair failed and we were unable to recover it. 00:29:19.636 [2024-07-26 22:15:30.817244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.817284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.817301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.817311] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.817319] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.636 [2024-07-26 22:15:30.827698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.636 qpair failed and we were unable to recover it. 
00:29:19.636 [2024-07-26 22:15:30.837234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.837280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.837296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.837306] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.837315] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.636 [2024-07-26 22:15:30.847741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.636 qpair failed and we were unable to recover it. 00:29:19.636 [2024-07-26 22:15:30.857396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.636 [2024-07-26 22:15:30.857438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.636 [2024-07-26 22:15:30.857455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.636 [2024-07-26 22:15:30.857464] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.636 [2024-07-26 22:15:30.857473] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.895 [2024-07-26 22:15:30.867908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.895 qpair failed and we were unable to recover it. 00:29:19.895 [2024-07-26 22:15:30.877415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.895 [2024-07-26 22:15:30.877462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.895 [2024-07-26 22:15:30.877478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.895 [2024-07-26 22:15:30.877491] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.895 [2024-07-26 22:15:30.877500] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.895 [2024-07-26 22:15:30.887697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.895 qpair failed and we were unable to recover it. 
00:29:19.895 [2024-07-26 22:15:30.897486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.895 [2024-07-26 22:15:30.897527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.895 [2024-07-26 22:15:30.897546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.895 [2024-07-26 22:15:30.897556] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.895 [2024-07-26 22:15:30.897565] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.895 [2024-07-26 22:15:30.907950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.895 qpair failed and we were unable to recover it. 00:29:19.895 [2024-07-26 22:15:30.917591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.895 [2024-07-26 22:15:30.917634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.895 [2024-07-26 22:15:30.917650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.895 [2024-07-26 22:15:30.917660] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.895 [2024-07-26 22:15:30.917669] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.895 [2024-07-26 22:15:30.927854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.895 qpair failed and we were unable to recover it. 00:29:19.895 [2024-07-26 22:15:30.937552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.895 [2024-07-26 22:15:30.937592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.895 [2024-07-26 22:15:30.937608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.895 [2024-07-26 22:15:30.937617] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.895 [2024-07-26 22:15:30.937632] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.895 [2024-07-26 22:15:30.948012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.895 qpair failed and we were unable to recover it. 
00:29:19.895 [2024-07-26 22:15:30.957718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.895 [2024-07-26 22:15:30.957763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.895 [2024-07-26 22:15:30.957781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.895 [2024-07-26 22:15:30.957791] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.895 [2024-07-26 22:15:30.957800] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.895 [2024-07-26 22:15:30.968131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.895 qpair failed and we were unable to recover it. 00:29:19.895 [2024-07-26 22:15:30.977779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.895 [2024-07-26 22:15:30.977817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.895 [2024-07-26 22:15:30.977834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.895 [2024-07-26 22:15:30.977845] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.895 [2024-07-26 22:15:30.977856] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.895 [2024-07-26 22:15:30.988122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.895 qpair failed and we were unable to recover it. 00:29:19.895 [2024-07-26 22:15:30.997741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.895 [2024-07-26 22:15:30.997780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.895 [2024-07-26 22:15:30.997797] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.895 [2024-07-26 22:15:30.997806] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.895 [2024-07-26 22:15:30.997815] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.895 [2024-07-26 22:15:31.008276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.895 qpair failed and we were unable to recover it. 
00:29:19.895 [2024-07-26 22:15:31.017816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.896 [2024-07-26 22:15:31.017857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.896 [2024-07-26 22:15:31.017874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.896 [2024-07-26 22:15:31.017884] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.896 [2024-07-26 22:15:31.017892] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.896 [2024-07-26 22:15:31.028195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.896 qpair failed and we were unable to recover it. 00:29:19.896 [2024-07-26 22:15:31.037845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.896 [2024-07-26 22:15:31.037884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.896 [2024-07-26 22:15:31.037901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.896 [2024-07-26 22:15:31.037910] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.896 [2024-07-26 22:15:31.037919] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.896 [2024-07-26 22:15:31.048335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.896 qpair failed and we were unable to recover it. 00:29:19.896 [2024-07-26 22:15:31.057915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.896 [2024-07-26 22:15:31.057953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.896 [2024-07-26 22:15:31.057972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.896 [2024-07-26 22:15:31.057982] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.896 [2024-07-26 22:15:31.057991] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.896 [2024-07-26 22:15:31.068179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.896 qpair failed and we were unable to recover it. 
00:29:19.896 [2024-07-26 22:15:31.077984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.896 [2024-07-26 22:15:31.078023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.896 [2024-07-26 22:15:31.078040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.896 [2024-07-26 22:15:31.078050] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.896 [2024-07-26 22:15:31.078059] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.896 [2024-07-26 22:15:31.088351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.896 qpair failed and we were unable to recover it. 00:29:19.896 [2024-07-26 22:15:31.097925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.896 [2024-07-26 22:15:31.097966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.896 [2024-07-26 22:15:31.097983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.896 [2024-07-26 22:15:31.097993] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.896 [2024-07-26 22:15:31.098002] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:19.896 [2024-07-26 22:15:31.108474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:19.896 qpair failed and we were unable to recover it. 00:29:19.896 [2024-07-26 22:15:31.118117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.896 [2024-07-26 22:15:31.118161] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.896 [2024-07-26 22:15:31.118178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.896 [2024-07-26 22:15:31.118188] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.896 [2024-07-26 22:15:31.118197] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.128460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 
00:29:20.155 [2024-07-26 22:15:31.138126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.138167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.138184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.138193] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.138203] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.148478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 00:29:20.155 [2024-07-26 22:15:31.158280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.158314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.158330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.158340] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.158349] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.168584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 00:29:20.155 [2024-07-26 22:15:31.178245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.178286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.178302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.178311] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.178320] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.188714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 
00:29:20.155 [2024-07-26 22:15:31.198378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.198415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.198431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.198441] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.198449] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.208435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 00:29:20.155 [2024-07-26 22:15:31.218432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.218472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.218489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.218499] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.218507] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.228890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 00:29:20.155 [2024-07-26 22:15:31.238440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.238481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.238497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.238507] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.238515] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.248778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 
00:29:20.155 [2024-07-26 22:15:31.258503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.258543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.258559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.258569] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.258577] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.268771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 00:29:20.155 [2024-07-26 22:15:31.278638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.278680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.278696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.278706] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.278715] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.288938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.155 qpair failed and we were unable to recover it. 00:29:20.155 [2024-07-26 22:15:31.298666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.155 [2024-07-26 22:15:31.298709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.155 [2024-07-26 22:15:31.298725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.155 [2024-07-26 22:15:31.298735] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.155 [2024-07-26 22:15:31.298744] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.155 [2024-07-26 22:15:31.309102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.156 qpair failed and we were unable to recover it. 
00:29:20.156 [2024-07-26 22:15:31.318847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.156 [2024-07-26 22:15:31.318889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.156 [2024-07-26 22:15:31.318905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.156 [2024-07-26 22:15:31.318918] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.156 [2024-07-26 22:15:31.318927] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.156 [2024-07-26 22:15:31.329110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.156 qpair failed and we were unable to recover it. 00:29:20.156 [2024-07-26 22:15:31.338848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.156 [2024-07-26 22:15:31.338889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.156 [2024-07-26 22:15:31.338906] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.156 [2024-07-26 22:15:31.338916] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.156 [2024-07-26 22:15:31.338925] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.156 [2024-07-26 22:15:31.349115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.156 qpair failed and we were unable to recover it. 00:29:20.156 [2024-07-26 22:15:31.358885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.156 [2024-07-26 22:15:31.358930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.156 [2024-07-26 22:15:31.358946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.156 [2024-07-26 22:15:31.358956] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.156 [2024-07-26 22:15:31.358965] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.156 [2024-07-26 22:15:31.369091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.156 qpair failed and we were unable to recover it. 
00:29:20.156 [2024-07-26 22:15:31.378880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.156 [2024-07-26 22:15:31.378918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.156 [2024-07-26 22:15:31.378935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.156 [2024-07-26 22:15:31.378945] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.156 [2024-07-26 22:15:31.378954] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.389264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 00:29:20.416 [2024-07-26 22:15:31.399048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.399085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.399102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.399111] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.399120] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.409352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 00:29:20.416 [2024-07-26 22:15:31.419083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.419121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.419138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.419148] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.419157] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.429367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 
00:29:20.416 [2024-07-26 22:15:31.439208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.439245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.439263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.439272] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.439281] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.449372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 00:29:20.416 [2024-07-26 22:15:31.459198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.459244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.459261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.459271] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.459279] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.469499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 00:29:20.416 [2024-07-26 22:15:31.479306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.479348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.479365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.479374] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.479383] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.489549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 
00:29:20.416 [2024-07-26 22:15:31.499319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.499359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.499378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.499388] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.499397] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.509579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 00:29:20.416 [2024-07-26 22:15:31.519337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.519381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.519397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.519407] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.519416] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.529717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 00:29:20.416 [2024-07-26 22:15:31.539391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.539429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.539446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.539456] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.539464] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.549731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 
00:29:20.416 [2024-07-26 22:15:31.559497] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.559535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.559552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.559562] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.559571] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.569717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 00:29:20.416 [2024-07-26 22:15:31.579470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.579512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.579530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.579541] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.579550] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.416 [2024-07-26 22:15:31.589933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.416 qpair failed and we were unable to recover it. 00:29:20.416 [2024-07-26 22:15:31.599574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.416 [2024-07-26 22:15:31.599618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.416 [2024-07-26 22:15:31.599649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.416 [2024-07-26 22:15:31.599659] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.416 [2024-07-26 22:15:31.599669] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.417 [2024-07-26 22:15:31.609888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.417 qpair failed and we were unable to recover it. 
00:29:20.417 [2024-07-26 22:15:31.619612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.417 [2024-07-26 22:15:31.619659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.417 [2024-07-26 22:15:31.619676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.417 [2024-07-26 22:15:31.619686] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.417 [2024-07-26 22:15:31.619696] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.417 [2024-07-26 22:15:31.629963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.417 qpair failed and we were unable to recover it. 00:29:20.417 [2024-07-26 22:15:31.639696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.417 [2024-07-26 22:15:31.639736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.417 [2024-07-26 22:15:31.639754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.417 [2024-07-26 22:15:31.639764] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.417 [2024-07-26 22:15:31.639773] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.676 [2024-07-26 22:15:31.649942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.659741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.659785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.659802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.659812] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.659821] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.669976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 
00:29:20.677 [2024-07-26 22:15:31.679799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.679844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.679860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.679870] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.679878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.690189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.699927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.699966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.699983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.699993] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.700002] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.710122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.719839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.719879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.719896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.719905] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.719914] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.730178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 
00:29:20.677 [2024-07-26 22:15:31.740067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.740108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.740125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.740134] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.740143] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.750400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.760174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.760216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.760232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.760242] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.760254] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.770297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.780169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.780209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.780226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.780235] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.780244] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.790465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 
00:29:20.677 [2024-07-26 22:15:31.800172] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.800213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.800230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.800239] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.800249] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.810425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.820138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.820187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.820204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.820213] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.820223] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.830445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.840237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.840275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.840292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.840301] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.840310] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.850487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 
00:29:20.677 [2024-07-26 22:15:31.860357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.860399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.860415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.860425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.860433] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.870637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.880304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.880343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.880360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.880369] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.880378] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.677 [2024-07-26 22:15:31.890760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.677 qpair failed and we were unable to recover it. 00:29:20.677 [2024-07-26 22:15:31.900415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.677 [2024-07-26 22:15:31.900454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.677 [2024-07-26 22:15:31.900471] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.677 [2024-07-26 22:15:31.900480] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.677 [2024-07-26 22:15:31.900489] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.937 [2024-07-26 22:15:31.910773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.937 qpair failed and we were unable to recover it. 
00:29:20.937 [2024-07-26 22:15:31.920521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.937 [2024-07-26 22:15:31.920570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:31.920587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:31.920597] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:31.920606] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:31.931001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 00:29:20.938 [2024-07-26 22:15:31.940526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:31.940570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:31.940592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:31.940602] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:31.940611] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:31.950987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 00:29:20.938 [2024-07-26 22:15:31.960632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:31.960670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:31.960686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:31.960696] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:31.960705] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:31.970983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 
00:29:20.938 [2024-07-26 22:15:31.980701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:31.980742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:31.980761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:31.980770] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:31.980779] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:31.990896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 00:29:20.938 [2024-07-26 22:15:32.000790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.000829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.000846] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.000856] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.000865] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:32.011140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 00:29:20.938 [2024-07-26 22:15:32.020778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.020813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.020829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.020839] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.020848] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:32.031100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 
00:29:20.938 [2024-07-26 22:15:32.040815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.040852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.040868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.040878] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.040888] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:32.051261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 00:29:20.938 [2024-07-26 22:15:32.060843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.060885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.060901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.060910] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.060919] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:32.071335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 00:29:20.938 [2024-07-26 22:15:32.080958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.081003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.081019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.081029] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.081037] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:32.091304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 
00:29:20.938 [2024-07-26 22:15:32.101057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.101099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.101116] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.101126] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.101136] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:32.111315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 00:29:20.938 [2024-07-26 22:15:32.121077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.121120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.121139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.121149] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.121159] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:32.131448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 00:29:20.938 [2024-07-26 22:15:32.141159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.141199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.141216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.141225] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.141234] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:20.938 [2024-07-26 22:15:32.151488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:20.938 qpair failed and we were unable to recover it. 
00:29:20.938 [2024-07-26 22:15:32.161215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:20.938 [2024-07-26 22:15:32.161254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:20.938 [2024-07-26 22:15:32.161271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:20.938 [2024-07-26 22:15:32.161280] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:20.938 [2024-07-26 22:15:32.161289] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.198 [2024-07-26 22:15:32.171619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.198 qpair failed and we were unable to recover it. 00:29:21.198 [2024-07-26 22:15:32.181200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.198 [2024-07-26 22:15:32.181239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.198 [2024-07-26 22:15:32.181255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.198 [2024-07-26 22:15:32.181264] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.198 [2024-07-26 22:15:32.181273] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.198 [2024-07-26 22:15:32.191598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.198 qpair failed and we were unable to recover it. 00:29:21.198 [2024-07-26 22:15:32.201353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.198 [2024-07-26 22:15:32.201392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.198 [2024-07-26 22:15:32.201408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.198 [2024-07-26 22:15:32.201418] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.198 [2024-07-26 22:15:32.201430] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.198 [2024-07-26 22:15:32.211744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.198 qpair failed and we were unable to recover it. 
00:29:21.198 [2024-07-26 22:15:32.221359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.198 [2024-07-26 22:15:32.221399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.198 [2024-07-26 22:15:32.221415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.198 [2024-07-26 22:15:32.221425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.198 [2024-07-26 22:15:32.221434] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.198 [2024-07-26 22:15:32.231835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.198 qpair failed and we were unable to recover it. 00:29:21.198 [2024-07-26 22:15:32.241444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.198 [2024-07-26 22:15:32.241485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.198 [2024-07-26 22:15:32.241502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.198 [2024-07-26 22:15:32.241513] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.241523] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.251878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 00:29:21.199 [2024-07-26 22:15:32.261518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.261558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.261575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.261584] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.261593] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.271878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 
00:29:21.199 [2024-07-26 22:15:32.281567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.281607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.281623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.281637] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.281646] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.292086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 00:29:21.199 [2024-07-26 22:15:32.301691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.301733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.301751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.301761] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.301770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.311972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 00:29:21.199 [2024-07-26 22:15:32.321770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.321818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.321835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.321845] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.321854] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.332136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 
00:29:21.199 [2024-07-26 22:15:32.341772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.341813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.341830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.341839] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.341848] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.352254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 00:29:21.199 [2024-07-26 22:15:32.361786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.361821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.361837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.361847] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.361856] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.372386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 00:29:21.199 [2024-07-26 22:15:32.381925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.381964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.381980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.381993] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.382002] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.392307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 
00:29:21.199 [2024-07-26 22:15:32.402032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.402076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.402093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.402102] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.402111] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.199 [2024-07-26 22:15:32.412355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.199 qpair failed and we were unable to recover it. 00:29:21.199 [2024-07-26 22:15:32.422082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.199 [2024-07-26 22:15:32.422123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.199 [2024-07-26 22:15:32.422139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.199 [2024-07-26 22:15:32.422149] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.199 [2024-07-26 22:15:32.422158] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.459 [2024-07-26 22:15:32.432488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.459 qpair failed and we were unable to recover it. 00:29:21.459 [2024-07-26 22:15:32.442153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.459 [2024-07-26 22:15:32.442190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.459 [2024-07-26 22:15:32.442206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.459 [2024-07-26 22:15:32.442216] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.459 [2024-07-26 22:15:32.442225] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.459 [2024-07-26 22:15:32.452545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.459 qpair failed and we were unable to recover it. 
00:29:21.459 [2024-07-26 22:15:32.462146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.459 [2024-07-26 22:15:32.462185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.459 [2024-07-26 22:15:32.462201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.459 [2024-07-26 22:15:32.462210] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.459 [2024-07-26 22:15:32.462219] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.459 [2024-07-26 22:15:32.472634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.459 qpair failed and we were unable to recover it. 00:29:21.459 [2024-07-26 22:15:32.482231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.459 [2024-07-26 22:15:32.482271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.459 [2024-07-26 22:15:32.482287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.459 [2024-07-26 22:15:32.482297] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.459 [2024-07-26 22:15:32.482306] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.459 [2024-07-26 22:15:32.492754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.459 qpair failed and we were unable to recover it. 00:29:21.459 [2024-07-26 22:15:32.502288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.459 [2024-07-26 22:15:32.502327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.459 [2024-07-26 22:15:32.502343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.459 [2024-07-26 22:15:32.502353] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.459 [2024-07-26 22:15:32.502361] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.459 [2024-07-26 22:15:32.512651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.459 qpair failed and we were unable to recover it. 
00:29:21.459 [2024-07-26 22:15:32.522267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.459 [2024-07-26 22:15:32.522303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.459 [2024-07-26 22:15:32.522319] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.459 [2024-07-26 22:15:32.522329] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.459 [2024-07-26 22:15:32.522338] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.459 [2024-07-26 22:15:32.532654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.459 qpair failed and we were unable to recover it. 00:29:21.460 [2024-07-26 22:15:32.542414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-07-26 22:15:32.542456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-07-26 22:15:32.542472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-07-26 22:15:32.542482] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-07-26 22:15:32.542491] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.460 [2024-07-26 22:15:32.552772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.460 qpair failed and we were unable to recover it. 00:29:21.460 [2024-07-26 22:15:32.562390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-07-26 22:15:32.562429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-07-26 22:15:32.562449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-07-26 22:15:32.562458] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-07-26 22:15:32.562467] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.460 [2024-07-26 22:15:32.572896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.460 qpair failed and we were unable to recover it. 
00:29:21.460 [2024-07-26 22:15:32.582440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-07-26 22:15:32.582483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-07-26 22:15:32.582499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-07-26 22:15:32.582509] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-07-26 22:15:32.582518] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.460 [2024-07-26 22:15:32.592976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.460 qpair failed and we were unable to recover it. 00:29:21.460 [2024-07-26 22:15:32.602618] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-07-26 22:15:32.602659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-07-26 22:15:32.602678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-07-26 22:15:32.602688] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-07-26 22:15:32.602697] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.460 [2024-07-26 22:15:32.612828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.460 qpair failed and we were unable to recover it. 00:29:21.460 [2024-07-26 22:15:32.622562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-07-26 22:15:32.622602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-07-26 22:15:32.622619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-07-26 22:15:32.622634] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-07-26 22:15:32.622642] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.460 [2024-07-26 22:15:32.633132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.460 qpair failed and we were unable to recover it. 
00:29:21.460 [2024-07-26 22:15:32.642779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-07-26 22:15:32.642819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-07-26 22:15:32.642836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-07-26 22:15:32.642846] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-07-26 22:15:32.642858] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.460 [2024-07-26 22:15:32.653066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.460 qpair failed and we were unable to recover it. 00:29:21.460 [2024-07-26 22:15:32.662700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-07-26 22:15:32.662735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-07-26 22:15:32.662752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-07-26 22:15:32.662761] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-07-26 22:15:32.662770] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.460 [2024-07-26 22:15:32.673355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.460 qpair failed and we were unable to recover it. 00:29:21.460 [2024-07-26 22:15:32.682828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.460 [2024-07-26 22:15:32.682867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.460 [2024-07-26 22:15:32.682884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.460 [2024-07-26 22:15:32.682894] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.460 [2024-07-26 22:15:32.682903] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.720 [2024-07-26 22:15:32.693188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.720 qpair failed and we were unable to recover it. 
00:29:21.720 [2024-07-26 22:15:32.702894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.720 [2024-07-26 22:15:32.702935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.720 [2024-07-26 22:15:32.702951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.720 [2024-07-26 22:15:32.702961] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.720 [2024-07-26 22:15:32.702969] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.720 [2024-07-26 22:15:32.713259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.720 qpair failed and we were unable to recover it. 00:29:21.720 [2024-07-26 22:15:32.722927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.720 [2024-07-26 22:15:32.722972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.720 [2024-07-26 22:15:32.722989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.720 [2024-07-26 22:15:32.723000] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.720 [2024-07-26 22:15:32.723009] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.720 [2024-07-26 22:15:32.733323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.720 qpair failed and we were unable to recover it. 00:29:21.720 [2024-07-26 22:15:32.743042] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.720 [2024-07-26 22:15:32.743083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.720 [2024-07-26 22:15:32.743101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.720 [2024-07-26 22:15:32.743110] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.720 [2024-07-26 22:15:32.743119] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.720 [2024-07-26 22:15:32.753533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.720 qpair failed and we were unable to recover it. 
00:29:21.720 [2024-07-26 22:15:32.763123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.720 [2024-07-26 22:15:32.763168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.720 [2024-07-26 22:15:32.763185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.720 [2024-07-26 22:15:32.763195] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.720 [2024-07-26 22:15:32.763204] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.720 [2024-07-26 22:15:32.773457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.720 qpair failed and we were unable to recover it. 00:29:21.721 [2024-07-26 22:15:32.783076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.783118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.783135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.783145] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.783154] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.721 [2024-07-26 22:15:32.793456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.721 qpair failed and we were unable to recover it. 00:29:21.721 [2024-07-26 22:15:32.803157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.803199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.803216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.803226] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.803235] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.721 [2024-07-26 22:15:32.813390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.721 qpair failed and we were unable to recover it. 
00:29:21.721 [2024-07-26 22:15:32.823151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.823195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.823212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.823224] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.823233] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.721 [2024-07-26 22:15:32.833595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.721 qpair failed and we were unable to recover it. 00:29:21.721 [2024-07-26 22:15:32.843304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.843347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.843363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.843373] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.843382] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.721 [2024-07-26 22:15:32.853671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.721 qpair failed and we were unable to recover it. 00:29:21.721 [2024-07-26 22:15:32.863298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.863340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.863356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.863365] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.863374] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.721 [2024-07-26 22:15:32.873650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.721 qpair failed and we were unable to recover it. 
00:29:21.721 [2024-07-26 22:15:32.883369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.883416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.883433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.883442] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.883452] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.721 [2024-07-26 22:15:32.894004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.721 qpair failed and we were unable to recover it. 00:29:21.721 [2024-07-26 22:15:32.903447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.903492] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.903508] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.903518] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.903527] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.721 [2024-07-26 22:15:32.913941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.721 qpair failed and we were unable to recover it. 00:29:21.721 [2024-07-26 22:15:32.923613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.923652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.923669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.923679] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.923688] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.721 [2024-07-26 22:15:32.933880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.721 qpair failed and we were unable to recover it. 
00:29:21.721 [2024-07-26 22:15:32.943711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.721 [2024-07-26 22:15:32.943752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.721 [2024-07-26 22:15:32.943770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.721 [2024-07-26 22:15:32.943780] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.721 [2024-07-26 22:15:32.943789] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.981 [2024-07-26 22:15:32.954022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.981 qpair failed and we were unable to recover it. 00:29:21.981 [2024-07-26 22:15:32.963670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.981 [2024-07-26 22:15:32.963710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.981 [2024-07-26 22:15:32.963726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.981 [2024-07-26 22:15:32.963736] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.981 [2024-07-26 22:15:32.963744] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.981 [2024-07-26 22:15:32.974108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.981 qpair failed and we were unable to recover it. 00:29:21.981 [2024-07-26 22:15:32.983670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.981 [2024-07-26 22:15:32.983713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.981 [2024-07-26 22:15:32.983729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.981 [2024-07-26 22:15:32.983739] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.981 [2024-07-26 22:15:32.983748] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.981 [2024-07-26 22:15:32.994304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.981 qpair failed and we were unable to recover it. 
00:29:21.981 [2024-07-26 22:15:33.003817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.981 [2024-07-26 22:15:33.003855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.981 [2024-07-26 22:15:33.003875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.981 [2024-07-26 22:15:33.003885] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.981 [2024-07-26 22:15:33.003894] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.981 [2024-07-26 22:15:33.014150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.981 qpair failed and we were unable to recover it. 00:29:21.981 [2024-07-26 22:15:33.023821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.981 [2024-07-26 22:15:33.023865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.981 [2024-07-26 22:15:33.023882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.981 [2024-07-26 22:15:33.023892] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.981 [2024-07-26 22:15:33.023902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.981 [2024-07-26 22:15:33.034315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.981 qpair failed and we were unable to recover it. 00:29:21.981 [2024-07-26 22:15:33.043987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.981 [2024-07-26 22:15:33.044032] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.981 [2024-07-26 22:15:33.044048] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.981 [2024-07-26 22:15:33.044058] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.981 [2024-07-26 22:15:33.044067] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.981 [2024-07-26 22:15:33.054351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.981 qpair failed and we were unable to recover it. 
00:29:21.981 [2024-07-26 22:15:33.064073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.981 [2024-07-26 22:15:33.064114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.981 [2024-07-26 22:15:33.064130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.981 [2024-07-26 22:15:33.064140] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.982 [2024-07-26 22:15:33.064149] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.982 [2024-07-26 22:15:33.074432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.982 qpair failed and we were unable to recover it. 00:29:21.982 [2024-07-26 22:15:33.084139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.982 [2024-07-26 22:15:33.084180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.982 [2024-07-26 22:15:33.084197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.982 [2024-07-26 22:15:33.084206] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.982 [2024-07-26 22:15:33.084215] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.982 [2024-07-26 22:15:33.094217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.982 qpair failed and we were unable to recover it. 00:29:21.982 [2024-07-26 22:15:33.104005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.982 [2024-07-26 22:15:33.104048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.982 [2024-07-26 22:15:33.104064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.982 [2024-07-26 22:15:33.104074] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.982 [2024-07-26 22:15:33.104083] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.982 [2024-07-26 22:15:33.114432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.982 qpair failed and we were unable to recover it. 
00:29:21.982 [2024-07-26 22:15:33.124191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.982 [2024-07-26 22:15:33.124236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.982 [2024-07-26 22:15:33.124253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.982 [2024-07-26 22:15:33.124263] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.982 [2024-07-26 22:15:33.124272] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.982 [2024-07-26 22:15:33.134475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.982 qpair failed and we were unable to recover it. 00:29:21.982 [2024-07-26 22:15:33.144240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.982 [2024-07-26 22:15:33.144281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.982 [2024-07-26 22:15:33.144297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.982 [2024-07-26 22:15:33.144307] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.982 [2024-07-26 22:15:33.144316] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.982 [2024-07-26 22:15:33.154537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.982 qpair failed and we were unable to recover it. 00:29:21.982 [2024-07-26 22:15:33.164261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.982 [2024-07-26 22:15:33.164299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.982 [2024-07-26 22:15:33.164316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.982 [2024-07-26 22:15:33.164326] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.982 [2024-07-26 22:15:33.164335] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.982 [2024-07-26 22:15:33.174710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.982 qpair failed and we were unable to recover it. 
00:29:21.982 [2024-07-26 22:15:33.184391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.982 [2024-07-26 22:15:33.184434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.982 [2024-07-26 22:15:33.184451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.982 [2024-07-26 22:15:33.184460] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.982 [2024-07-26 22:15:33.184469] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:21.982 [2024-07-26 22:15:33.194728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.982 qpair failed and we were unable to recover it. 00:29:21.982 [2024-07-26 22:15:33.204372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:21.982 [2024-07-26 22:15:33.204416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:21.982 [2024-07-26 22:15:33.204433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:21.982 [2024-07-26 22:15:33.204444] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:21.982 [2024-07-26 22:15:33.204453] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.214755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 00:29:22.240 [2024-07-26 22:15:33.224498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.224534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.224551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.224561] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.224570] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.234704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 
00:29:22.240 [2024-07-26 22:15:33.244405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.244450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.244467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.244476] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.244486] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.254899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 00:29:22.240 [2024-07-26 22:15:33.264542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.264583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.264600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.264612] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.264621] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.274684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 00:29:22.240 [2024-07-26 22:15:33.284637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.284679] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.284696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.284707] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.284716] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.295187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 
00:29:22.240 [2024-07-26 22:15:33.304580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.304620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.304641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.304651] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.304660] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.314898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 00:29:22.240 [2024-07-26 22:15:33.324690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.324733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.324750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.324760] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.324769] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.335044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 00:29:22.240 [2024-07-26 22:15:33.344818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.344861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.344877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.344887] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.344896] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.355026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 
00:29:22.240 [2024-07-26 22:15:33.364727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.364766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.364782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.364792] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.364801] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.375016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 00:29:22.240 [2024-07-26 22:15:33.384774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.384817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.384834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.384844] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.384853] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.395015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 00:29:22.240 [2024-07-26 22:15:33.404897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.404940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.404957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.404967] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.404976] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.415198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 
00:29:22.240 [2024-07-26 22:15:33.424945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.424983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.425000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.240 [2024-07-26 22:15:33.425010] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.240 [2024-07-26 22:15:33.425019] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.240 [2024-07-26 22:15:33.435262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.240 qpair failed and we were unable to recover it. 00:29:22.240 [2024-07-26 22:15:33.445048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.240 [2024-07-26 22:15:33.445093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.240 [2024-07-26 22:15:33.445114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.241 [2024-07-26 22:15:33.445124] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.241 [2024-07-26 22:15:33.445133] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.241 [2024-07-26 22:15:33.455402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.241 qpair failed and we were unable to recover it. 00:29:22.241 [2024-07-26 22:15:33.465192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.241 [2024-07-26 22:15:33.465234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.241 [2024-07-26 22:15:33.465251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.241 [2024-07-26 22:15:33.465260] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.241 [2024-07-26 22:15:33.465269] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.475474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 
00:29:22.499 [2024-07-26 22:15:33.485166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.485213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.485231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.485240] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.485250] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.495466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 00:29:22.499 [2024-07-26 22:15:33.505181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.505221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.505237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.505247] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.505256] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.515536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 00:29:22.499 [2024-07-26 22:15:33.525236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.525281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.525298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.525307] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.525316] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.535587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 
00:29:22.499 [2024-07-26 22:15:33.545264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.545308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.545324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.545334] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.545343] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.555583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 00:29:22.499 [2024-07-26 22:15:33.565387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.565429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.565446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.565456] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.565465] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.575686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 00:29:22.499 [2024-07-26 22:15:33.585412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.585451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.585467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.585477] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.585486] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.595829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 
00:29:22.499 [2024-07-26 22:15:33.605484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.605527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.605544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.605554] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.605563] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.615658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 00:29:22.499 [2024-07-26 22:15:33.625432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.625478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.625496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.625506] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.625515] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.635990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 00:29:22.499 [2024-07-26 22:15:33.645639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.645681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.645698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.499 [2024-07-26 22:15:33.645707] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.499 [2024-07-26 22:15:33.645717] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.499 [2024-07-26 22:15:33.655921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.499 qpair failed and we were unable to recover it. 
00:29:22.499 [2024-07-26 22:15:33.665587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.499 [2024-07-26 22:15:33.665633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.499 [2024-07-26 22:15:33.665650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.500 [2024-07-26 22:15:33.665660] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.500 [2024-07-26 22:15:33.665669] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.500 [2024-07-26 22:15:33.676014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.500 qpair failed and we were unable to recover it. 00:29:22.500 [2024-07-26 22:15:33.685730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.500 [2024-07-26 22:15:33.685772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.500 [2024-07-26 22:15:33.685788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.500 [2024-07-26 22:15:33.685797] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.500 [2024-07-26 22:15:33.685807] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.500 [2024-07-26 22:15:33.695964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.500 qpair failed and we were unable to recover it. 00:29:22.500 [2024-07-26 22:15:33.705734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.500 [2024-07-26 22:15:33.705776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.500 [2024-07-26 22:15:33.705793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.500 [2024-07-26 22:15:33.705802] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.500 [2024-07-26 22:15:33.705814] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.500 [2024-07-26 22:15:33.716075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.500 qpair failed and we were unable to recover it. 
00:29:22.758 [2024-07-26 22:15:33.725866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.758 [2024-07-26 22:15:33.725907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.758 [2024-07-26 22:15:33.725923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.758 [2024-07-26 22:15:33.725933] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.758 [2024-07-26 22:15:33.725942] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.758 [2024-07-26 22:15:33.736137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.758 qpair failed and we were unable to recover it. 00:29:22.758 [2024-07-26 22:15:33.745923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.758 [2024-07-26 22:15:33.745967] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.745984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.745994] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.746003] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.756318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 00:29:22.759 [2024-07-26 22:15:33.765953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.765995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.766012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.766021] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.766030] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.776333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 
00:29:22.759 [2024-07-26 22:15:33.786052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.786094] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.786111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.786121] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.786130] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.796502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 00:29:22.759 [2024-07-26 22:15:33.806090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.806129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.806146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.806156] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.806165] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.816567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 00:29:22.759 [2024-07-26 22:15:33.826163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.826203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.826220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.826229] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.826238] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.836569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 
00:29:22.759 [2024-07-26 22:15:33.846162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.846205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.846221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.846230] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.846240] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.856566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 00:29:22.759 [2024-07-26 22:15:33.866179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.866219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.866236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.866246] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.866255] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.876615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 00:29:22.759 [2024-07-26 22:15:33.886305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.886343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.886362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.886372] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.886381] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.896613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 
00:29:22.759 [2024-07-26 22:15:33.906311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.906360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.906376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.906386] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.906395] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.916800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 00:29:22.759 [2024-07-26 22:15:33.926392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.926436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.926454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.926463] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.926473] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.936658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 00:29:22.759 [2024-07-26 22:15:33.946534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.946578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.946595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.946604] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.946614] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.956938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 
00:29:22.759 [2024-07-26 22:15:33.966601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:22.759 [2024-07-26 22:15:33.966645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:22.759 [2024-07-26 22:15:33.966661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:22.759 [2024-07-26 22:15:33.966671] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:22.759 [2024-07-26 22:15:33.966680] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:22.759 [2024-07-26 22:15:33.977023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:22.759 qpair failed and we were unable to recover it. 00:29:23.018 [2024-07-26 22:15:33.986669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.018 [2024-07-26 22:15:33.986710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.018 [2024-07-26 22:15:33.986727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.018 [2024-07-26 22:15:33.986736] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.018 [2024-07-26 22:15:33.986745] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.018 [2024-07-26 22:15:33.996928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-07-26 22:15:34.006699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.018 [2024-07-26 22:15:34.006740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.018 [2024-07-26 22:15:34.006756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.018 [2024-07-26 22:15:34.006766] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.018 [2024-07-26 22:15:34.006775] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.018 [2024-07-26 22:15:34.017044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.018 qpair failed and we were unable to recover it. 
00:29:23.018 [2024-07-26 22:15:34.026712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.018 [2024-07-26 22:15:34.026757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.018 [2024-07-26 22:15:34.026775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.018 [2024-07-26 22:15:34.026785] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.018 [2024-07-26 22:15:34.026794] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.018 [2024-07-26 22:15:34.037150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-07-26 22:15:34.046763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.018 [2024-07-26 22:15:34.046805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.018 [2024-07-26 22:15:34.046821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.018 [2024-07-26 22:15:34.046831] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.018 [2024-07-26 22:15:34.046840] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.018 [2024-07-26 22:15:34.057256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.018 qpair failed and we were unable to recover it. 00:29:23.018 [2024-07-26 22:15:34.066828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.018 [2024-07-26 22:15:34.066866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.018 [2024-07-26 22:15:34.066886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.066896] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.066905] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.077376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.019 [2024-07-26 22:15:34.086934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.019 [2024-07-26 22:15:34.086974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.019 [2024-07-26 22:15:34.086991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.087000] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.087009] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.097355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-07-26 22:15:34.106966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.019 [2024-07-26 22:15:34.107004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.019 [2024-07-26 22:15:34.107021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.107031] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.107040] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.117200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-07-26 22:15:34.127119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.019 [2024-07-26 22:15:34.127158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.019 [2024-07-26 22:15:34.127175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.127184] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.127193] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.137424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.019 [2024-07-26 22:15:34.147180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.019 [2024-07-26 22:15:34.147219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.019 [2024-07-26 22:15:34.147236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.147245] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.147257] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.157490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-07-26 22:15:34.167180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.019 [2024-07-26 22:15:34.167226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.019 [2024-07-26 22:15:34.167243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.167252] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.167262] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.177464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-07-26 22:15:34.187240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.019 [2024-07-26 22:15:34.187278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.019 [2024-07-26 22:15:34.187294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.187304] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.187312] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.197586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 
00:29:23.019 [2024-07-26 22:15:34.207249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.019 [2024-07-26 22:15:34.207284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.019 [2024-07-26 22:15:34.207302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.207311] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.207321] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.217619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.019 [2024-07-26 22:15:34.227327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.019 [2024-07-26 22:15:34.227369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.019 [2024-07-26 22:15:34.227386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.019 [2024-07-26 22:15:34.227396] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.019 [2024-07-26 22:15:34.227404] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.019 [2024-07-26 22:15:34.237764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.019 qpair failed and we were unable to recover it. 00:29:23.278 [2024-07-26 22:15:34.247353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.278 [2024-07-26 22:15:34.247397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.278 [2024-07-26 22:15:34.247413] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.278 [2024-07-26 22:15:34.247422] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.278 [2024-07-26 22:15:34.247431] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.278 [2024-07-26 22:15:34.257782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.278 qpair failed and we were unable to recover it. 
00:29:23.278 [2024-07-26 22:15:34.267437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.278 [2024-07-26 22:15:34.267473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.278 [2024-07-26 22:15:34.267489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.278 [2024-07-26 22:15:34.267498] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.278 [2024-07-26 22:15:34.267507] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.278 [2024-07-26 22:15:34.277864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.278 qpair failed and we were unable to recover it. 00:29:23.278 [2024-07-26 22:15:34.287556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.278 [2024-07-26 22:15:34.287601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.278 [2024-07-26 22:15:34.287617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.278 [2024-07-26 22:15:34.287631] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.278 [2024-07-26 22:15:34.287641] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.278 [2024-07-26 22:15:34.297913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.278 qpair failed and we were unable to recover it. 00:29:23.278 [2024-07-26 22:15:34.307555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.278 [2024-07-26 22:15:34.307594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.278 [2024-07-26 22:15:34.307611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.278 [2024-07-26 22:15:34.307620] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.278 [2024-07-26 22:15:34.307634] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.278 [2024-07-26 22:15:34.317927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.278 qpair failed and we were unable to recover it. 
00:29:23.278 [2024-07-26 22:15:34.327570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.278 [2024-07-26 22:15:34.327614] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.278 [2024-07-26 22:15:34.327635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.278 [2024-07-26 22:15:34.327648] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.278 [2024-07-26 22:15:34.327657] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.278 [2024-07-26 22:15:34.337933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.278 qpair failed and we were unable to recover it. 00:29:23.278 [2024-07-26 22:15:34.347692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.278 [2024-07-26 22:15:34.347736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.278 [2024-07-26 22:15:34.347753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.278 [2024-07-26 22:15:34.347762] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.278 [2024-07-26 22:15:34.347771] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.278 [2024-07-26 22:15:34.358095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.278 qpair failed and we were unable to recover it. 00:29:23.278 [2024-07-26 22:15:34.367739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.278 [2024-07-26 22:15:34.367778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.278 [2024-07-26 22:15:34.367794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.278 [2024-07-26 22:15:34.367804] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.278 [2024-07-26 22:15:34.367813] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.278 [2024-07-26 22:15:34.378101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.278 qpair failed and we were unable to recover it. 
00:29:23.278 [2024-07-26 22:15:34.387826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.278 [2024-07-26 22:15:34.387866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.278 [2024-07-26 22:15:34.387883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.278 [2024-07-26 22:15:34.387893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.278 [2024-07-26 22:15:34.387902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.279 [2024-07-26 22:15:34.398171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.279 qpair failed and we were unable to recover it. 00:29:23.279 [2024-07-26 22:15:34.407736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.279 [2024-07-26 22:15:34.407779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.279 [2024-07-26 22:15:34.407795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.279 [2024-07-26 22:15:34.407805] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.279 [2024-07-26 22:15:34.407814] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.279 [2024-07-26 22:15:34.418188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.279 qpair failed and we were unable to recover it. 00:29:23.279 [2024-07-26 22:15:34.427858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.279 [2024-07-26 22:15:34.427900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.279 [2024-07-26 22:15:34.427917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.279 [2024-07-26 22:15:34.427926] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.279 [2024-07-26 22:15:34.427935] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.279 [2024-07-26 22:15:34.438238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.279 qpair failed and we were unable to recover it. 
00:29:23.279 [2024-07-26 22:15:34.447968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.279 [2024-07-26 22:15:34.448005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.279 [2024-07-26 22:15:34.448021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.279 [2024-07-26 22:15:34.448031] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.279 [2024-07-26 22:15:34.448041] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.279 [2024-07-26 22:15:34.458408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.279 qpair failed and we were unable to recover it. 00:29:23.279 [2024-07-26 22:15:34.468035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.279 [2024-07-26 22:15:34.468077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.279 [2024-07-26 22:15:34.468093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.279 [2024-07-26 22:15:34.468103] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.279 [2024-07-26 22:15:34.468112] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.279 [2024-07-26 22:15:34.478294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.279 qpair failed and we were unable to recover it. 00:29:23.279 [2024-07-26 22:15:34.488040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.279 [2024-07-26 22:15:34.488089] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.279 [2024-07-26 22:15:34.488105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.279 [2024-07-26 22:15:34.488115] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.279 [2024-07-26 22:15:34.488124] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.279 [2024-07-26 22:15:34.498559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.279 qpair failed and we were unable to recover it. 
00:29:23.537 [2024-07-26 22:15:34.508297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.537 [2024-07-26 22:15:34.508334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.537 [2024-07-26 22:15:34.508355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.537 [2024-07-26 22:15:34.508365] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.537 [2024-07-26 22:15:34.508374] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.537 [2024-07-26 22:15:34.518679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.537 qpair failed and we were unable to recover it. 00:29:23.537 [2024-07-26 22:15:34.528348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.537 [2024-07-26 22:15:34.528390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.537 [2024-07-26 22:15:34.528407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.537 [2024-07-26 22:15:34.528417] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.537 [2024-07-26 22:15:34.528425] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.537 [2024-07-26 22:15:34.538687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.537 qpair failed and we were unable to recover it. 00:29:23.537 [2024-07-26 22:15:34.548380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.537 [2024-07-26 22:15:34.548420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.537 [2024-07-26 22:15:34.548436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.537 [2024-07-26 22:15:34.548445] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.537 [2024-07-26 22:15:34.548454] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.537 [2024-07-26 22:15:34.558758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.537 qpair failed and we were unable to recover it. 
00:29:23.537 [2024-07-26 22:15:34.568381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.538 [2024-07-26 22:15:34.568423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.538 [2024-07-26 22:15:34.568439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.538 [2024-07-26 22:15:34.568449] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.538 [2024-07-26 22:15:34.568458] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.538 [2024-07-26 22:15:34.578922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.538 qpair failed and we were unable to recover it. 00:29:23.538 [2024-07-26 22:15:34.588454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.538 [2024-07-26 22:15:34.588497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.538 [2024-07-26 22:15:34.588513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.538 [2024-07-26 22:15:34.588523] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.538 [2024-07-26 22:15:34.588535] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.538 [2024-07-26 22:15:34.598919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.538 qpair failed and we were unable to recover it. 00:29:23.538 [2024-07-26 22:15:34.608634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.538 [2024-07-26 22:15:34.608673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.538 [2024-07-26 22:15:34.608690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.538 [2024-07-26 22:15:34.608699] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.538 [2024-07-26 22:15:34.608708] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.538 [2024-07-26 22:15:34.618956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.538 qpair failed and we were unable to recover it. 
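Note: the block above repeats one expected failure signature while the target keeps the controller unreachable during tc2: the target appears to reject each new I/O-queue CONNECT ("Unknown controller ID 0x1"), the initiator then sees the CONNECT complete with sct 1 / sc 130, and the qpair is torn down with a CQ transport error before the next retry. A quick way to tally how many CONNECT attempts were rejected in a saved copy of this console output (the log file name here is an assumption) is:

    # count rejected fabric CONNECT completions in a saved console log (hypothetical file name)
    grep -c 'Connect command completed with error: sct 1, sc 130' nvmf-phy-autotest-console.log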
00:29:23.538 [2024-07-26 22:15:34.628561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:23.538 [2024-07-26 22:15:34.628600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:23.538 [2024-07-26 22:15:34.628616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:23.538 [2024-07-26 22:15:34.628633] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:23.538 [2024-07-26 22:15:34.628643] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:23.538 [2024-07-26 22:15:34.639117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:23.538 qpair failed and we were unable to recover it. 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Read completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write 
completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 Write completed with error (sct=0, sc=8) 00:29:24.473 starting I/O failed 00:29:24.473 [2024-07-26 22:15:35.644309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:24.473 [2024-07-26 22:15:35.651367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.474 [2024-07-26 22:15:35.651408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.474 [2024-07-26 22:15:35.651427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.474 [2024-07-26 22:15:35.651437] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.474 [2024-07-26 22:15:35.651446] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002c61c0 00:29:24.474 [2024-07-26 22:15:35.662082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:24.474 qpair failed and we were unable to recover it. 00:29:24.474 [2024-07-26 22:15:35.671692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.474 [2024-07-26 22:15:35.671736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.474 [2024-07-26 22:15:35.671753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.474 [2024-07-26 22:15:35.671763] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.474 [2024-07-26 22:15:35.671772] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002c61c0 00:29:24.474 [2024-07-26 22:15:35.682158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:24.474 qpair failed and we were unable to recover it. 00:29:24.474 [2024-07-26 22:15:35.691833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.474 [2024-07-26 22:15:35.691870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.474 [2024-07-26 22:15:35.691891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.474 [2024-07-26 22:15:35.691902] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.474 [2024-07-26 22:15:35.691911] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:24.733 [2024-07-26 22:15:35.702085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.733 qpair failed and we were unable to recover it. 
00:29:24.733 [2024-07-26 22:15:35.711738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.733 [2024-07-26 22:15:35.711778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.733 [2024-07-26 22:15:35.711795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.733 [2024-07-26 22:15:35.711805] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.733 [2024-07-26 22:15:35.711813] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:24.733 [2024-07-26 22:15:35.722302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:24.733 qpair failed and we were unable to recover it. 00:29:24.733 [2024-07-26 22:15:35.722429] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:24.733 A controller has encountered a failure and is being reset. 00:29:24.733 [2024-07-26 22:15:35.731897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.733 [2024-07-26 22:15:35.731947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.733 [2024-07-26 22:15:35.731975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.733 [2024-07-26 22:15:35.731990] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.733 [2024-07-26 22:15:35.732003] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:24.733 [2024-07-26 22:15:35.742318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:24.733 qpair failed and we were unable to recover it. 00:29:24.733 [2024-07-26 22:15:35.751974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:24.733 [2024-07-26 22:15:35.752018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:24.733 [2024-07-26 22:15:35.752036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:24.733 [2024-07-26 22:15:35.752045] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:24.733 [2024-07-26 22:15:35.752055] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:24.733 [2024-07-26 22:15:35.762447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:24.733 qpair failed and we were unable to recover it. 
00:29:24.733 [2024-07-26 22:15:35.762573] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:24.733 [2024-07-26 22:15:35.795432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:24.733 Controller properly reset. 00:29:24.733 Initializing NVMe Controllers 00:29:24.733 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.733 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:24.733 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:24.733 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:24.733 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:24.733 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:24.733 Initialization complete. Launching workers. 00:29:24.733 Starting thread on core 1 00:29:24.733 Starting thread on core 2 00:29:24.733 Starting thread on core 3 00:29:24.733 Starting thread on core 0 00:29:24.733 22:15:35 -- host/target_disconnect.sh@59 -- # sync 00:29:24.733 00:29:24.733 real 0m12.597s 00:29:24.733 user 0m27.131s 00:29:24.733 sys 0m3.282s 00:29:24.733 22:15:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.733 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:29:24.733 ************************************ 00:29:24.733 END TEST nvmf_target_disconnect_tc2 00:29:24.733 ************************************ 00:29:24.733 22:15:35 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:29:24.733 22:15:35 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:29:24.733 22:15:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:24.733 22:15:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:24.733 22:15:35 -- common/autotest_common.sh@10 -- # set +x 00:29:24.733 ************************************ 00:29:24.733 START TEST nvmf_target_disconnect_tc3 00:29:24.733 ************************************ 00:29:24.733 22:15:35 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc3 00:29:24.733 22:15:35 -- host/target_disconnect.sh@65 -- # reconnectpid=2353418 00:29:24.733 22:15:35 -- host/target_disconnect.sh@67 -- # sleep 2 00:29:24.733 22:15:35 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:29:24.992 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.898 22:15:37 -- host/target_disconnect.sh@68 -- # kill -9 2352078 00:29:26.898 22:15:37 -- host/target_disconnect.sh@70 -- # sleep 2 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed 
with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Write completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 Read completed with error (sct=0, sc=8) 00:29:28.277 starting I/O failed 00:29:28.277 [2024-07-26 22:15:39.098917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:28.846 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 2352078 Killed "${NVMF_APP[@]}" "$@" 00:29:28.846 22:15:39 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:29:28.846 22:15:39 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:28.846 22:15:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:28.846 22:15:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:28.846 22:15:39 -- common/autotest_common.sh@10 -- # set +x 00:29:28.846 22:15:39 -- nvmf/common.sh@469 -- # nvmfpid=2354067 00:29:28.846 22:15:39 -- nvmf/common.sh@470 -- # waitforlisten 2354067 00:29:28.846 22:15:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:28.846 22:15:39 -- common/autotest_common.sh@819 -- # '[' -z 2354067 ']' 00:29:28.846 22:15:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.846 22:15:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:28.846 22:15:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.846 22:15:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:28.846 22:15:39 -- common/autotest_common.sh@10 -- # set +x 00:29:28.846 [2024-07-26 22:15:39.962448] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:28.846 [2024-07-26 22:15:39.962501] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.846 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.846 [2024-07-26 22:15:40.068955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Write completed with error (sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 Read completed with error 
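Note: the lines above show tc3 killing the previous target application (pid 2352078) and starting a fresh nvmf_tgt with shm id 0, tracepoint mask 0xFFFF and core mask 0xF0, then waiting for its RPC socket. A minimal standalone sketch of that startup step, polling the default /var/tmp/spdk.sock directly instead of using the harness's waitforlisten helper (paths taken from this workspace, polling loop is an assumption), might look like:

    # start the target with the same flags as the test, then wait for the RPC socket to answer
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done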
(sct=0, sc=8) 00:29:29.106 starting I/O failed 00:29:29.106 [2024-07-26 22:15:40.104211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:29.106 [2024-07-26 22:15:40.108017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:29.106 [2024-07-26 22:15:40.108128] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.106 [2024-07-26 22:15:40.108140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.106 [2024-07-26 22:15:40.108150] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.106 [2024-07-26 22:15:40.108277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:29.106 [2024-07-26 22:15:40.108391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:29.106 [2024-07-26 22:15:40.108500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:29.106 [2024-07-26 22:15:40.108502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:29.674 22:15:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:29.675 22:15:40 -- common/autotest_common.sh@852 -- # return 0 00:29:29.675 22:15:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:29.675 22:15:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:29.675 22:15:40 -- common/autotest_common.sh@10 -- # set +x 00:29:29.675 22:15:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:29.675 22:15:40 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:29.675 22:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.675 22:15:40 -- common/autotest_common.sh@10 -- # set +x 00:29:29.675 Malloc0 00:29:29.675 22:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.675 22:15:40 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:29.675 22:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.675 22:15:40 -- common/autotest_common.sh@10 -- # set +x 00:29:29.675 [2024-07-26 22:15:40.868706] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8247d0/0x830b40) succeed. 00:29:29.675 [2024-07-26 22:15:40.879264] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x825dc0/0x8d0c40) succeed. 
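Note: with the new target up, the test configures it over RPC: the Malloc0 bdev and the RDMA transport above, then (on the following lines) the nqn.2016-06.io.spdk:cnode1 subsystem, its namespace, and listeners on the failover address 192.168.100.9. The rpc_cmd wrapper belongs to the test harness; the same sequence issued directly with scripts/rpc.py (a sketch, default RPC socket assumed) would be roughly:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420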
00:29:29.934 22:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.934 22:15:40 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.934 22:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.934 22:15:40 -- common/autotest_common.sh@10 -- # set +x 00:29:29.934 22:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.934 22:15:41 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:29.934 22:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.934 22:15:41 -- common/autotest_common.sh@10 -- # set +x 00:29:29.934 22:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.934 22:15:41 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:29:29.934 22:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.934 22:15:41 -- common/autotest_common.sh@10 -- # set +x 00:29:29.934 [2024-07-26 22:15:41.027003] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:29:29.934 22:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.934 22:15:41 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:29:29.934 22:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:29.934 22:15:41 -- common/autotest_common.sh@10 -- # set +x 00:29:29.934 22:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:29.934 22:15:41 -- host/target_disconnect.sh@73 -- # wait 2353418 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Read completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Read completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Read completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Read completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Read completed with error (sct=0, sc=8) 00:29:29.934 starting I/O failed 00:29:29.934 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Read completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Read completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Read completed 
with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Read completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Write completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Read completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Read completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Read completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 Read completed with error (sct=0, sc=8) 00:29:29.935 starting I/O failed 00:29:29.935 [2024-07-26 22:15:41.109227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.935 [2024-07-26 22:15:41.110838] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:29.935 [2024-07-26 22:15:41.110857] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:29.935 [2024-07-26 22:15:41.110866] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:31.374 [2024-07-26 22:15:42.114801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.374 qpair failed and we were unable to recover it. 00:29:31.374 [2024-07-26 22:15:42.116372] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:31.374 [2024-07-26 22:15:42.116389] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:31.374 [2024-07-26 22:15:42.116397] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:31.942 [2024-07-26 22:15:43.120245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.942 qpair failed and we were unable to recover it. 00:29:31.942 [2024-07-26 22:15:43.121744] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:31.942 [2024-07-26 22:15:43.121761] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:31.942 [2024-07-26 22:15:43.121769] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:33.321 [2024-07-26 22:15:44.125548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.321 qpair failed and we were unable to recover it. 
00:29:33.321 [2024-07-26 22:15:44.127138] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:33.321 [2024-07-26 22:15:44.127155] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:33.321 [2024-07-26 22:15:44.127163] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:34.258 [2024-07-26 22:15:45.131022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.258 qpair failed and we were unable to recover it. 00:29:34.258 [2024-07-26 22:15:45.132472] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:34.258 [2024-07-26 22:15:45.132489] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:34.258 [2024-07-26 22:15:45.132497] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:35.198 [2024-07-26 22:15:46.136351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.198 qpair failed and we were unable to recover it. 00:29:35.198 [2024-07-26 22:15:46.137862] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:35.198 [2024-07-26 22:15:46.137881] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:35.198 [2024-07-26 22:15:46.137888] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:36.138 [2024-07-26 22:15:47.141878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.138 qpair failed and we were unable to recover it. 00:29:36.138 [2024-07-26 22:15:47.143325] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:36.138 [2024-07-26 22:15:47.143342] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:36.138 [2024-07-26 22:15:47.143352] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:37.076 [2024-07-26 22:15:48.147223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:37.076 qpair failed and we were unable to recover it. 00:29:37.076 [2024-07-26 22:15:48.148972] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:37.076 [2024-07-26 22:15:48.148998] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:37.076 [2024-07-26 22:15:48.149007] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:38.013 [2024-07-26 22:15:49.152756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.013 qpair failed and we were unable to recover it. 
00:29:38.013 [2024-07-26 22:15:49.154215] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:38.013 [2024-07-26 22:15:49.154231] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:38.013 [2024-07-26 22:15:49.154240] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:38.948 [2024-07-26 22:15:50.158040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:38.948 qpair failed and we were unable to recover it. 00:29:38.948 [2024-07-26 22:15:50.158150] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:38.948 A controller has encountered a failure and is being reset. 00:29:38.948 Resorting to new failover address 192.168.100.9 00:29:38.948 [2024-07-26 22:15:50.159799] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:38.948 [2024-07-26 22:15:50.159827] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:38.948 [2024-07-26 22:15:50.159839] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:40.327 [2024-07-26 22:15:51.163694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:40.327 qpair failed and we were unable to recover it. 00:29:40.327 [2024-07-26 22:15:51.165235] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:40.327 [2024-07-26 22:15:51.165253] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:40.327 [2024-07-26 22:15:51.165261] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:41.264 [2024-07-26 22:15:52.169162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:41.264 qpair failed and we were unable to recover it. 00:29:41.264 [2024-07-26 22:15:52.169259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:41.264 [2024-07-26 22:15:52.169369] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:41.264 [2024-07-26 22:15:52.171294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:41.264 Controller properly reset. 
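Note: this is the failover tc3 is exercising: after the keep-alive to nqn.2016-06.io.spdk:cnode1 fails, the initiator resorts to the alternate address 192.168.100.9 passed via alt_traddr and the controller is eventually reset properly. The reconnect example was started earlier in this run with exactly that primary/alternate pair; re-running it by hand with the same arguments would look like:

    # primary listener on 192.168.100.8, failover listener on 192.168.100.9 (addresses from this run)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'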
00:29:42.201 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Read completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 Write completed with error (sct=0, sc=8) 00:29:42.202 starting I/O failed 00:29:42.202 [2024-07-26 22:15:53.215226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:42.202 Initializing NVMe Controllers 00:29:42.202 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.202 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:42.202 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:42.202 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:42.202 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:42.202 Associating RDMA (addr:192.168.100.8 
subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:42.202 Initialization complete. Launching workers. 00:29:42.202 Starting thread on core 1 00:29:42.202 Starting thread on core 2 00:29:42.202 Starting thread on core 3 00:29:42.202 Starting thread on core 0 00:29:42.202 22:15:53 -- host/target_disconnect.sh@74 -- # sync 00:29:42.202 00:29:42.202 real 0m17.364s 00:29:42.202 user 0m59.498s 00:29:42.202 sys 0m5.663s 00:29:42.202 22:15:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.202 22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:29:42.202 ************************************ 00:29:42.202 END TEST nvmf_target_disconnect_tc3 00:29:42.202 ************************************ 00:29:42.202 22:15:53 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:42.202 22:15:53 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:42.202 22:15:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:42.202 22:15:53 -- nvmf/common.sh@116 -- # sync 00:29:42.202 22:15:53 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:42.202 22:15:53 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:42.202 22:15:53 -- nvmf/common.sh@119 -- # set +e 00:29:42.202 22:15:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:42.202 22:15:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:42.202 rmmod nvme_rdma 00:29:42.202 rmmod nvme_fabrics 00:29:42.202 22:15:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:42.202 22:15:53 -- nvmf/common.sh@123 -- # set -e 00:29:42.202 22:15:53 -- nvmf/common.sh@124 -- # return 0 00:29:42.202 22:15:53 -- nvmf/common.sh@477 -- # '[' -n 2354067 ']' 00:29:42.202 22:15:53 -- nvmf/common.sh@478 -- # killprocess 2354067 00:29:42.202 22:15:53 -- common/autotest_common.sh@926 -- # '[' -z 2354067 ']' 00:29:42.202 22:15:53 -- common/autotest_common.sh@930 -- # kill -0 2354067 00:29:42.202 22:15:53 -- common/autotest_common.sh@931 -- # uname 00:29:42.202 22:15:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:42.202 22:15:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2354067 00:29:42.202 22:15:53 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:29:42.202 22:15:53 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:29:42.202 22:15:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2354067' 00:29:42.202 killing process with pid 2354067 00:29:42.202 22:15:53 -- common/autotest_common.sh@945 -- # kill 2354067 00:29:42.202 22:15:53 -- common/autotest_common.sh@950 -- # wait 2354067 00:29:42.462 22:15:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:42.462 22:15:53 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:42.462 00:29:42.462 real 0m39.871s 00:29:42.462 user 2m23.520s 00:29:42.462 sys 0m15.905s 00:29:42.462 22:15:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.462 22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:29:42.462 ************************************ 00:29:42.462 END TEST nvmf_target_disconnect 00:29:42.462 ************************************ 00:29:42.721 22:15:53 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:42.721 22:15:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:42.721 22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:29:42.721 22:15:53 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:42.721 00:29:42.721 real 21m57.647s 00:29:42.721 user 67m45.478s 00:29:42.721 sys 5m39.967s 00:29:42.721 22:15:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.721 
22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:29:42.721 ************************************ 00:29:42.721 END TEST nvmf_rdma 00:29:42.721 ************************************ 00:29:42.721 22:15:53 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:42.721 22:15:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:42.721 22:15:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:42.721 22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:29:42.721 ************************************ 00:29:42.721 START TEST spdkcli_nvmf_rdma 00:29:42.721 ************************************ 00:29:42.721 22:15:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:42.721 * Looking for test storage... 00:29:42.721 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:29:42.721 22:15:53 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:29:42.721 22:15:53 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:42.721 22:15:53 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:29:42.721 22:15:53 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.721 22:15:53 -- nvmf/common.sh@7 -- # uname -s 00:29:42.721 22:15:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.721 22:15:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.721 22:15:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.721 22:15:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.721 22:15:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.721 22:15:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.721 22:15:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.721 22:15:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.721 22:15:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.721 22:15:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.721 22:15:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:42.721 22:15:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:42.721 22:15:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.721 22:15:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.721 22:15:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.721 22:15:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:42.721 22:15:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.721 22:15:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.721 22:15:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.721 22:15:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.721 22:15:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.721 22:15:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.721 22:15:53 -- paths/export.sh@5 -- # export PATH 00:29:42.721 22:15:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.721 22:15:53 -- nvmf/common.sh@46 -- # : 0 00:29:42.721 22:15:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:42.721 22:15:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:42.721 22:15:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:42.721 22:15:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.721 22:15:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.721 22:15:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:42.721 22:15:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:42.721 22:15:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:42.721 22:15:53 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:42.721 22:15:53 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:42.721 22:15:53 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:42.721 22:15:53 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:42.721 22:15:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:42.721 22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:29:42.721 22:15:53 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:42.721 22:15:53 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2356522 00:29:42.721 22:15:53 -- spdkcli/common.sh@34 -- # waitforlisten 2356522 00:29:42.721 22:15:53 -- common/autotest_common.sh@819 -- # '[' -z 2356522 ']' 00:29:42.721 22:15:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.721 22:15:53 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:42.721 22:15:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:42.721 22:15:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:29:42.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.722 22:15:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:42.722 22:15:53 -- common/autotest_common.sh@10 -- # set +x 00:29:42.980 [2024-07-26 22:15:53.986024] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:42.980 [2024-07-26 22:15:53.986078] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356522 ] 00:29:42.980 EAL: No free 2048 kB hugepages reported on node 1 00:29:42.980 [2024-07-26 22:15:54.071353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:42.980 [2024-07-26 22:15:54.109489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:42.980 [2024-07-26 22:15:54.109633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.980 [2024-07-26 22:15:54.109635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.918 22:15:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:43.918 22:15:54 -- common/autotest_common.sh@852 -- # return 0 00:29:43.918 22:15:54 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:43.918 22:15:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:43.918 22:15:54 -- common/autotest_common.sh@10 -- # set +x 00:29:43.918 22:15:54 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:43.918 22:15:54 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:43.918 22:15:54 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:43.918 22:15:54 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:43.918 22:15:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.918 22:15:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:43.918 22:15:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:43.918 22:15:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:43.918 22:15:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.918 22:15:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:43.918 22:15:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.918 22:15:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:43.918 22:15:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:43.918 22:15:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:43.918 22:15:54 -- common/autotest_common.sh@10 -- # set +x 00:29:52.046 22:16:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:52.046 22:16:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:52.046 22:16:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:52.046 22:16:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:52.046 22:16:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:52.046 22:16:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:52.046 22:16:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:52.046 22:16:03 -- nvmf/common.sh@294 -- # net_devs=() 00:29:52.046 22:16:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:52.046 22:16:03 -- nvmf/common.sh@295 -- # e810=() 00:29:52.046 22:16:03 -- nvmf/common.sh@295 -- # local -ga e810 00:29:52.046 22:16:03 -- nvmf/common.sh@296 -- # x722=() 00:29:52.046 22:16:03 -- nvmf/common.sh@296 -- # local -ga x722 00:29:52.046 22:16:03 -- nvmf/common.sh@297 -- # mlx=() 
00:29:52.046 22:16:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:52.046 22:16:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.046 22:16:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:52.046 22:16:03 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:52.046 22:16:03 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:52.046 22:16:03 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:52.046 22:16:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:52.046 22:16:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:52.046 22:16:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:52.046 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:52.046 22:16:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:52.046 22:16:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:52.046 22:16:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:52.046 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:52.046 22:16:03 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:52.046 22:16:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:52.046 22:16:03 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:52.046 22:16:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:52.046 22:16:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.046 22:16:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:52.046 22:16:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.046 22:16:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:52.046 Found net devices under 0000:d9:00.0: mlx_0_0 
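Each surviving PCI function is then mapped to its kernel netdev through sysfs, which is what produces the "Found net devices under 0000:d9:00.0: mlx_0_0" lines. A small self-contained sketch of that lookup follows; the PCI address is taken from this log, and nullglob is enabled here so the emptiness check is meaningful.

    #!/usr/bin/env bash
    shopt -s nullglob
    pci=0000:d9:00.0                                  # example address from the log above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one sysfs entry per netdev on this function
    if (( ${#pci_net_devs[@]} == 0 )); then
        echo "no net devices under $pci" >&2
    else
        pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    fi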
00:29:52.046 22:16:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.046 22:16:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:52.046 22:16:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.046 22:16:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:52.046 22:16:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.046 22:16:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:52.046 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:52.046 22:16:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.046 22:16:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:52.046 22:16:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:52.047 22:16:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:52.047 22:16:03 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:52.047 22:16:03 -- nvmf/common.sh@57 -- # uname 00:29:52.047 22:16:03 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:52.047 22:16:03 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:52.047 22:16:03 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:52.047 22:16:03 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:52.047 22:16:03 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:52.047 22:16:03 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:52.047 22:16:03 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:52.047 22:16:03 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:52.047 22:16:03 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:52.047 22:16:03 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:52.047 22:16:03 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:52.047 22:16:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:52.047 22:16:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:52.047 22:16:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:52.047 22:16:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:52.047 22:16:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:52.047 22:16:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:52.047 22:16:03 -- nvmf/common.sh@104 -- # continue 2 00:29:52.047 22:16:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:52.047 22:16:03 -- nvmf/common.sh@104 -- # continue 2 00:29:52.047 22:16:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:52.047 22:16:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:52.047 22:16:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:52.047 22:16:03 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:52.047 22:16:03 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:52.047 22:16:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:52.047 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:52.047 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:52.047 altname enp217s0f0np0 00:29:52.047 altname ens818f0np0 00:29:52.047 inet 192.168.100.8/24 scope global mlx_0_0 00:29:52.047 valid_lft forever preferred_lft forever 00:29:52.047 22:16:03 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:52.047 22:16:03 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:52.047 22:16:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:52.047 22:16:03 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:52.047 22:16:03 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:52.047 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:52.047 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:52.047 altname enp217s0f1np1 00:29:52.047 altname ens818f1np1 00:29:52.047 inet 192.168.100.9/24 scope global mlx_0_1 00:29:52.047 valid_lft forever preferred_lft forever 00:29:52.047 22:16:03 -- nvmf/common.sh@410 -- # return 0 00:29:52.047 22:16:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:52.047 22:16:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:52.047 22:16:03 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:52.047 22:16:03 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:52.047 22:16:03 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:52.047 22:16:03 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:52.047 22:16:03 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:52.047 22:16:03 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:52.047 22:16:03 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:52.047 22:16:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:52.047 22:16:03 -- nvmf/common.sh@104 -- # continue 2 00:29:52.047 22:16:03 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:52.047 22:16:03 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:52.047 22:16:03 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:52.047 22:16:03 -- nvmf/common.sh@104 -- # continue 2 00:29:52.047 22:16:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:52.047 22:16:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:52.047 22:16:03 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:52.047 
22:16:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:52.047 22:16:03 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:52.047 22:16:03 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:52.047 22:16:03 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:52.047 22:16:03 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:52.047 22:16:03 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:52.047 192.168.100.9' 00:29:52.047 22:16:03 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:52.047 192.168.100.9' 00:29:52.047 22:16:03 -- nvmf/common.sh@445 -- # head -n 1 00:29:52.047 22:16:03 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:52.047 22:16:03 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:52.047 192.168.100.9' 00:29:52.047 22:16:03 -- nvmf/common.sh@446 -- # tail -n +2 00:29:52.047 22:16:03 -- nvmf/common.sh@446 -- # head -n 1 00:29:52.047 22:16:03 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:52.047 22:16:03 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:52.047 22:16:03 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:52.047 22:16:03 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:52.047 22:16:03 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:52.047 22:16:03 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:52.306 22:16:03 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:52.306 22:16:03 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:52.306 22:16:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:52.306 22:16:03 -- common/autotest_common.sh@10 -- # set +x 00:29:52.306 22:16:03 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:52.306 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:52.306 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:52.306 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:52.306 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:52.306 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:52.306 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:52.306 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:52.306 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 
192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:52.306 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:52.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:52.306 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:52.306 ' 00:29:52.565 [2024-07-26 22:16:03.630982] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:54.503 [2024-07-26 22:16:05.692754] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11edd40/0x131e7c0) succeed. 00:29:54.503 [2024-07-26 22:16:05.702518] rdma.c:2629:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11ef420/0x11fe640) succeed. 
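Before the spdkcli configuration is applied, nvmftestinit derives the target addresses from those interfaces: the first IPv4 address of each RDMA netdev is extracted with the ip/awk/cut pipeline visible in the trace, and the first and second lines of the resulting list become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A condensed restatement of that logic is sketched below; the interface names and the 192.168.100.8/9 values are specific to this test bed.

    get_ip_address() {
        # First IPv4 address on the interface, without the /prefix length.
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 in this run
    [ -n "$NVMF_FIRST_TARGET_IP" ] || { echo "no RDMA IP found" >&2; exit 1; }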
00:29:55.879 [2024-07-26 22:16:06.945662] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:58.414 [2024-07-26 22:16:09.120631] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:59.791 [2024-07-26 22:16:10.998898] rdma.c:3080:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:30:01.694 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:01.694 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:01.694 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:01.694 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:01.694 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:01.694 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:01.694 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:01.694 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:30:01.694 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:30:01.694 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:30:01.694 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:01.694 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:01.694 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:01.694 22:16:12 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:01.694 22:16:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.694 22:16:12 -- common/autotest_common.sh@10 -- # set +x 00:30:01.694 22:16:12 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:01.694 22:16:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:01.694 22:16:12 -- common/autotest_common.sh@10 -- # set +x 00:30:01.694 22:16:12 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:01.694 22:16:12 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:01.954 22:16:12 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:01.954 22:16:13 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:01.954 22:16:13 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:01.954 22:16:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.954 22:16:13 -- common/autotest_common.sh@10 -- # set +x 00:30:01.954 22:16:13 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:01.954 22:16:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:01.954 22:16:13 -- common/autotest_common.sh@10 -- # set +x 00:30:01.954 22:16:13 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:01.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:01.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:01.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:01.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:30:01.954 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:30:01.954 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:01.954 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:01.954 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:01.954 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:01.954 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:01.954 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:01.954 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:01.954 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:01.954 ' 00:30:07.222 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:07.222 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:07.222 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:07.222 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:07.222 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:30:07.222 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:30:07.222 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:07.222 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:07.222 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:07.222 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:07.222 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:07.222 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:07.222 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:07.222 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:07.222 22:16:18 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:07.222 22:16:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:07.222 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:07.222 22:16:18 -- spdkcli/nvmf.sh@90 -- # killprocess 2356522 00:30:07.222 22:16:18 -- common/autotest_common.sh@926 -- # '[' -z 2356522 ']' 00:30:07.222 22:16:18 -- common/autotest_common.sh@930 -- # kill -0 2356522 00:30:07.222 22:16:18 -- common/autotest_common.sh@931 -- # uname 00:30:07.222 22:16:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:07.222 22:16:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2356522 00:30:07.222 22:16:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:07.222 22:16:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:07.222 22:16:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2356522' 00:30:07.222 killing process with pid 2356522 00:30:07.222 22:16:18 -- common/autotest_common.sh@945 -- # kill 2356522 00:30:07.222 [2024-07-26 22:16:18.136416] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:07.222 22:16:18 -- common/autotest_common.sh@950 -- # wait 2356522 00:30:07.222 22:16:18 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:30:07.222 22:16:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:07.222 22:16:18 -- nvmf/common.sh@116 -- # sync 00:30:07.222 22:16:18 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:30:07.222 22:16:18 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:30:07.222 22:16:18 -- nvmf/common.sh@119 -- # set +e 00:30:07.222 22:16:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:07.222 22:16:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:30:07.222 rmmod nvme_rdma 00:30:07.222 rmmod nvme_fabrics 00:30:07.222 22:16:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
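The teardown that follows the config removal (killprocess, then nvmftestfini/nvmfcleanup) disables errexit and retries the kernel module unload up to 20 times, which is why the rmmod output above is interleaved with the script trace. Roughly as below, and hedged because the full loop body is not visible in the trace: the break-on-success and the wait between attempts are assumptions here.

    sync
    set +e                                        # unloading may fail while references remain
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1                                   # assumed back-off; not shown in the log
    done
    set -e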
00:30:07.222 22:16:18 -- nvmf/common.sh@123 -- # set -e 00:30:07.222 22:16:18 -- nvmf/common.sh@124 -- # return 0 00:30:07.222 22:16:18 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:30:07.222 22:16:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:07.222 22:16:18 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:30:07.222 00:30:07.222 real 0m24.600s 00:30:07.222 user 0m52.304s 00:30:07.222 sys 0m7.430s 00:30:07.222 22:16:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.222 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:07.222 ************************************ 00:30:07.222 END TEST spdkcli_nvmf_rdma 00:30:07.222 ************************************ 00:30:07.222 22:16:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:30:07.222 22:16:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:30:07.222 22:16:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:30:07.222 22:16:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:30:07.222 22:16:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:30:07.222 22:16:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:30:07.222 22:16:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:07.222 22:16:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:07.481 22:16:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:30:07.481 22:16:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:30:07.481 22:16:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:30:07.481 22:16:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:30:07.481 22:16:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:30:07.481 22:16:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:30:07.481 22:16:18 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:30:07.481 22:16:18 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:30:07.481 22:16:18 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:30:07.481 22:16:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:07.481 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:07.481 22:16:18 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:30:07.481 22:16:18 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:30:07.481 22:16:18 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:30:07.481 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:30:14.071 INFO: APP EXITING 00:30:14.071 INFO: killing all VMs 00:30:14.071 INFO: killing vhost app 00:30:14.071 WARN: no vhost pid file found 00:30:14.071 INFO: EXIT DONE 00:30:17.363 Waiting for block devices as requested 00:30:17.363 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:17.363 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:17.363 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:17.363 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:17.622 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:17.622 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:17.622 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:17.879 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:17.879 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:17.879 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:17.879 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:18.138 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:18.138 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:18.138 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:18.399 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:18.399 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:18.399 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:22.634 Cleaning 00:30:22.634 Removing: 
/var/run/dpdk/spdk0/config 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:22.634 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:22.634 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:22.634 Removing: /var/run/dpdk/spdk1/config 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:22.634 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:22.634 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:22.634 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:22.634 Removing: /var/run/dpdk/spdk2/config 00:30:22.634 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:22.634 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:22.635 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:22.635 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:22.635 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:22.635 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:22.635 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:22.635 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:22.635 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:22.635 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:22.635 Removing: /var/run/dpdk/spdk3/config 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:22.635 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:22.635 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:22.635 Removing: /var/run/dpdk/spdk4/config 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:22.635 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:22.635 Removing: 
/var/run/dpdk/spdk4/hugepage_info 00:30:22.635 Removing: /dev/shm/bdevperf_trace.pid2172119 00:30:22.635 Removing: /dev/shm/bdevperf_trace.pid2274674 00:30:22.635 Removing: /dev/shm/bdev_svc_trace.1 00:30:22.635 Removing: /dev/shm/nvmf_trace.0 00:30:22.635 Removing: /dev/shm/spdk_tgt_trace.pid1997720 00:30:22.635 Removing: /var/run/dpdk/spdk0 00:30:22.635 Removing: /var/run/dpdk/spdk1 00:30:22.635 Removing: /var/run/dpdk/spdk2 00:30:22.635 Removing: /var/run/dpdk/spdk3 00:30:22.635 Removing: /var/run/dpdk/spdk4 00:30:22.635 Removing: /var/run/dpdk/spdk_pid1995096 00:30:22.635 Removing: /var/run/dpdk/spdk_pid1996372 00:30:22.635 Removing: /var/run/dpdk/spdk_pid1997720 00:30:22.635 Removing: /var/run/dpdk/spdk_pid1998343 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2004218 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2005696 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2005995 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2006291 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2006619 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2006889 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2007052 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2007318 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2007627 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2008482 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2011423 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2011810 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2012154 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2012292 00:30:22.635 Removing: /var/run/dpdk/spdk_pid2012875 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2013077 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2013460 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2013721 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2014016 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2014036 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2014325 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2014492 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2014968 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2015252 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2015540 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2015736 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2015902 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2015965 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2016233 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2016519 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2016754 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2016951 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2017106 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2017383 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2017653 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2017939 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2018211 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2018492 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2018717 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2018915 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2019076 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2019354 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2019622 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2019909 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2020177 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2020458 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2020642 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2020852 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2021039 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2021328 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2021594 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2021877 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2022153 00:30:22.893 
Removing: /var/run/dpdk/spdk_pid2022416 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2022568 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2022762 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2023011 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2023292 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2023564 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2023852 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2024123 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2024394 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2024567 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2024770 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2024996 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2025283 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2025549 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2025833 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2025907 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2026248 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2031641 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2133553 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2138433 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2149641 00:30:22.893 Removing: /var/run/dpdk/spdk_pid2155894 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2160654 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2161471 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2172119 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2172399 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2177188 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2183604 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2186348 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2198043 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2226673 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2230871 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2236115 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2272576 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2273584 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2274674 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2279530 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2287965 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2288979 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2289851 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2290791 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2291198 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2296243 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2296317 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2301551 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2302089 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2302641 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2303431 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2303444 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2306242 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2308318 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2310199 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2312084 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2313967 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2315858 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2322882 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2323442 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2325751 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2326967 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2334563 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2337505 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2343552 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2343788 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2350988 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2351489 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2353418 00:30:23.152 Removing: /var/run/dpdk/spdk_pid2356522 00:30:23.152 
Clean 00:30:23.411 killing process with pid 1938358 00:30:41.500 killing process with pid 1938355 00:30:41.500 killing process with pid 1938357 00:30:41.500 killing process with pid 1938356 00:30:41.500 22:16:51 -- common/autotest_common.sh@1436 -- # return 0 00:30:41.500 22:16:51 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:30:41.500 22:16:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:41.500 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:41.500 22:16:51 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:30:41.500 22:16:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:41.500 22:16:51 -- common/autotest_common.sh@10 -- # set +x 00:30:41.500 22:16:51 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:41.500 22:16:51 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:30:41.500 22:16:51 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:30:41.500 22:16:51 -- spdk/autotest.sh@394 -- # hash lcov 00:30:41.500 22:16:51 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:41.500 22:16:51 -- spdk/autotest.sh@396 -- # hostname 00:30:41.500 22:16:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:30:41.500 geninfo: WARNING: invalid characters removed from testname! 00:30:59.588 22:17:07 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:59.588 22:17:10 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:00.966 22:17:11 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:02.343 22:17:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:03.721 22:17:14 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:05.658 22:17:16 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:31:07.034 22:17:17 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:07.034 22:17:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:07.034 22:17:18 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:07.034 22:17:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:07.034 22:17:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:07.034 22:17:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.034 22:17:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.034 22:17:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.034 22:17:18 -- paths/export.sh@5 -- $ export PATH 00:31:07.034 22:17:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:07.034 22:17:18 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:31:07.034 22:17:18 -- common/autobuild_common.sh@438 -- $ date +%s 00:31:07.034 22:17:18 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1722025038.XXXXXX 00:31:07.034 22:17:18 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1722025038.tf0HcI 00:31:07.034 22:17:18 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:31:07.034 22:17:18 -- common/autobuild_common.sh@444 -- $ '[' -n v23.11 ']' 00:31:07.034 22:17:18 -- common/autobuild_common.sh@445 -- $ dirname 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:31:07.034 22:17:18 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:31:07.034 22:17:18 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:07.035 22:17:18 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:07.035 22:17:18 -- common/autobuild_common.sh@454 -- $ get_config_params 00:31:07.035 22:17:18 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:31:07.035 22:17:18 -- common/autotest_common.sh@10 -- $ set +x 00:31:07.035 22:17:18 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:31:07.035 22:17:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:31:07.035 22:17:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:07.035 22:17:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:07.035 22:17:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:07.035 22:17:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:07.035 22:17:18 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:07.035 22:17:18 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:07.035 22:17:18 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:07.035 22:17:18 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:31:07.035 22:17:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:07.035 + [[ -n 1883772 ]] 00:31:07.035 + sudo kill 1883772 00:31:07.044 [Pipeline] } 00:31:07.062 [Pipeline] // stage 00:31:07.067 [Pipeline] } 00:31:07.084 [Pipeline] // timeout 00:31:07.089 [Pipeline] } 00:31:07.106 [Pipeline] // catchError 00:31:07.111 [Pipeline] } 00:31:07.128 [Pipeline] // wrap 00:31:07.134 [Pipeline] } 00:31:07.149 [Pipeline] // catchError 00:31:07.158 [Pipeline] stage 00:31:07.161 [Pipeline] { (Epilogue) 00:31:07.175 [Pipeline] catchError 00:31:07.177 [Pipeline] { 00:31:07.191 [Pipeline] echo 00:31:07.192 Cleanup processes 00:31:07.198 [Pipeline] sh 00:31:07.482 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:07.482 2379420 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:07.495 [Pipeline] sh 00:31:07.778 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:07.778 ++ grep -v 'sudo pgrep' 00:31:07.778 ++ awk '{print $1}' 00:31:07.778 + sudo kill -9 00:31:07.778 + true 00:31:07.790 [Pipeline] sh 00:31:08.073 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:08.073 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:31:14.635 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:31:17.935 [Pipeline] sh 00:31:18.220 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:18.220 Artifacts sizes are good 
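The coverage stage in the epilogue drives lcov in two passes: a capture against the spdk tree tagged with the hostname, then a series of -r calls that prune dpdk, system headers, and example/app code from the merged report. A condensed sketch of that sequence follows; the flag list is abbreviated relative to the trace, the paths are copied from it, and folding the separate -r invocations into one loop is an editorial restatement rather than what autotest.sh literally does.

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    SRC=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    OUT=$SRC/../output
    # Capture this run's data, then merge it with the pre-test baseline.
    lcov $LCOV_OPTS -c -d "$SRC" -t "$(hostname)" -o "$OUT/cov_test.info"
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    # Drop everything that is not SPDK source proper.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done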
00:31:18.236 [Pipeline] archiveArtifacts 00:31:18.245 Archiving artifacts 00:31:18.445 [Pipeline] sh 00:31:18.732 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:31:18.747 [Pipeline] cleanWs 00:31:18.757 [WS-CLEANUP] Deleting project workspace... 00:31:18.758 [WS-CLEANUP] Deferred wipeout is used... 00:31:18.765 [WS-CLEANUP] done 00:31:18.767 [Pipeline] } 00:31:18.786 [Pipeline] // catchError 00:31:18.798 [Pipeline] sh 00:31:19.080 + logger -p user.info -t JENKINS-CI 00:31:19.090 [Pipeline] } 00:31:19.105 [Pipeline] // stage 00:31:19.111 [Pipeline] } 00:31:19.127 [Pipeline] // node 00:31:19.133 [Pipeline] End of Pipeline 00:31:19.185 Finished: SUCCESS